00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2000 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3266 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.023 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.024 The recommended git tool is: git 00:00:00.024 using credential 00000000-0000-0000-0000-000000000002 00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.040 Fetching changes from the remote Git repository 00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.058 Using shallow fetch with depth 1 00:00:00.058 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.058 > git --version # timeout=10 00:00:00.071 > git --version # 'git version 2.39.2' 00:00:00.071 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.098 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.098 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.557 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.568 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.578 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:02.578 > git config core.sparsecheckout # timeout=10 00:00:02.587 > git read-tree -mu HEAD # timeout=10 00:00:02.602 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:02.618 Commit message: "inventory: add WCP3 to free inventory" 00:00:02.618 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:02.694 [Pipeline] Start of Pipeline 00:00:02.707 [Pipeline] library 00:00:02.709 Loading library shm_lib@master 00:00:02.709 Library shm_lib@master is cached. Copying from home. 00:00:02.725 [Pipeline] node 00:00:02.741 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.743 [Pipeline] { 00:00:02.751 [Pipeline] catchError 00:00:02.753 [Pipeline] { 00:00:02.762 [Pipeline] wrap 00:00:02.769 [Pipeline] { 00:00:02.774 [Pipeline] stage 00:00:02.775 [Pipeline] { (Prologue) 00:00:02.915 [Pipeline] sh 00:00:03.201 + logger -p user.info -t JENKINS-CI 00:00:03.220 [Pipeline] echo 00:00:03.222 Node: GP11 00:00:03.231 [Pipeline] sh 00:00:03.530 [Pipeline] setCustomBuildProperty 00:00:03.541 [Pipeline] echo 00:00:03.542 Cleanup processes 00:00:03.547 [Pipeline] sh 00:00:03.829 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.829 902040 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.843 [Pipeline] sh 00:00:04.124 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.124 ++ grep -v 'sudo pgrep' 00:00:04.124 ++ awk '{print $1}' 00:00:04.124 + sudo kill -9 00:00:04.124 + true 00:00:04.137 [Pipeline] cleanWs 00:00:04.144 [WS-CLEANUP] Deleting project workspace... 00:00:04.144 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.151 [WS-CLEANUP] done 00:00:04.154 [Pipeline] setCustomBuildProperty 00:00:04.163 [Pipeline] sh 00:00:04.440 + sudo git config --global --replace-all safe.directory '*' 00:00:04.547 [Pipeline] httpRequest 00:00:04.568 [Pipeline] echo 00:00:04.570 Sorcerer 10.211.164.101 is alive 00:00:04.576 [Pipeline] httpRequest 00:00:04.580 HttpMethod: GET 00:00:04.581 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.581 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.583 Response Code: HTTP/1.1 200 OK 00:00:04.584 Success: Status code 200 is in the accepted range: 200,404 00:00:04.584 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.144 [Pipeline] sh 00:00:05.429 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.443 [Pipeline] httpRequest 00:00:05.478 [Pipeline] echo 00:00:05.479 Sorcerer 10.211.164.101 is alive 00:00:05.485 [Pipeline] httpRequest 00:00:05.490 HttpMethod: GET 00:00:05.490 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:05.491 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:05.503 Response Code: HTTP/1.1 200 OK 00:00:05.503 Success: Status code 200 is in the accepted range: 200,404 00:00:05.504 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:13.049 [Pipeline] sh 00:01:13.356 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:15.901 [Pipeline] sh 00:01:16.181 + git -C spdk log --oneline -n5 00:01:16.181 719d03c6a sock/uring: only register net impl if supported 00:01:16.181 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:16.181 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:16.181 6c7c1f57e accel: add sequence outstanding stat 00:01:16.181 3bc8e6a26 accel: add utility to put task 00:01:16.200 [Pipeline] withCredentials 00:01:16.211 > git --version # timeout=10 00:01:16.221 > git --version # 'git version 2.39.2' 00:01:16.237 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:16.239 [Pipeline] { 00:01:16.248 [Pipeline] retry 00:01:16.250 [Pipeline] { 00:01:16.266 [Pipeline] sh 00:01:16.555 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:18.494 [Pipeline] } 00:01:18.518 [Pipeline] // retry 00:01:18.524 [Pipeline] } 00:01:18.545 [Pipeline] // withCredentials 00:01:18.556 [Pipeline] httpRequest 00:01:18.580 [Pipeline] echo 00:01:18.582 Sorcerer 10.211.164.101 is alive 00:01:18.589 [Pipeline] httpRequest 00:01:18.593 HttpMethod: GET 00:01:18.594 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.595 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:18.596 Response Code: HTTP/1.1 200 OK 00:01:18.597 Success: Status code 200 is in the accepted range: 200,404 00:01:18.597 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.869 [Pipeline] sh 00:01:26.148 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:28.057 [Pipeline] sh 00:01:28.337 + git -C dpdk log --oneline -n5 00:01:28.337 caf0f5d395 version: 22.11.4 00:01:28.337 7d6f1cc05f 
Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:28.337 dc9c799c7d vhost: fix missing spinlock unlock 00:01:28.337 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:28.337 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:28.348 [Pipeline] } 00:01:28.365 [Pipeline] // stage 00:01:28.374 [Pipeline] stage 00:01:28.376 [Pipeline] { (Prepare) 00:01:28.397 [Pipeline] writeFile 00:01:28.413 [Pipeline] sh 00:01:28.690 + logger -p user.info -t JENKINS-CI 00:01:28.704 [Pipeline] sh 00:01:28.985 + logger -p user.info -t JENKINS-CI 00:01:28.996 [Pipeline] sh 00:01:29.275 + cat autorun-spdk.conf 00:01:29.275 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.275 SPDK_TEST_NVMF=1 00:01:29.275 SPDK_TEST_NVME_CLI=1 00:01:29.275 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.275 SPDK_TEST_NVMF_NICS=e810 00:01:29.275 SPDK_TEST_VFIOUSER=1 00:01:29.275 SPDK_RUN_UBSAN=1 00:01:29.275 NET_TYPE=phy 00:01:29.275 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:29.275 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.281 RUN_NIGHTLY=1 00:01:29.287 [Pipeline] readFile 00:01:29.313 [Pipeline] withEnv 00:01:29.315 [Pipeline] { 00:01:29.327 [Pipeline] sh 00:01:29.612 + set -ex 00:01:29.612 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:29.612 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:29.612 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.612 ++ SPDK_TEST_NVMF=1 00:01:29.612 ++ SPDK_TEST_NVME_CLI=1 00:01:29.612 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:29.612 ++ SPDK_TEST_NVMF_NICS=e810 00:01:29.612 ++ SPDK_TEST_VFIOUSER=1 00:01:29.612 ++ SPDK_RUN_UBSAN=1 00:01:29.612 ++ NET_TYPE=phy 00:01:29.613 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:29.613 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.613 ++ RUN_NIGHTLY=1 00:01:29.613 + case $SPDK_TEST_NVMF_NICS in 00:01:29.613 + DRIVERS=ice 00:01:29.613 + [[ tcp == \r\d\m\a ]] 00:01:29.613 + [[ -n ice ]] 00:01:29.613 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:29.613 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:29.613 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:29.613 rmmod: ERROR: Module irdma is not currently loaded 00:01:29.613 rmmod: ERROR: Module i40iw is not currently loaded 00:01:29.613 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:29.613 + true 00:01:29.613 + for D in $DRIVERS 00:01:29.613 + sudo modprobe ice 00:01:29.613 + exit 0 00:01:29.622 [Pipeline] } 00:01:29.635 [Pipeline] // withEnv 00:01:29.640 [Pipeline] } 00:01:29.655 [Pipeline] // stage 00:01:29.665 [Pipeline] catchError 00:01:29.666 [Pipeline] { 00:01:29.683 [Pipeline] timeout 00:01:29.683 Timeout set to expire in 50 min 00:01:29.685 [Pipeline] { 00:01:29.702 [Pipeline] stage 00:01:29.704 [Pipeline] { (Tests) 00:01:29.722 [Pipeline] sh 00:01:30.073 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.073 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.073 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.073 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:30.073 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:30.073 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.073 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:30.073 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.073 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:30.073 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:30.073 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:30.073 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:30.073 + source /etc/os-release 00:01:30.073 ++ NAME='Fedora Linux' 00:01:30.073 ++ VERSION='38 (Cloud Edition)' 00:01:30.073 ++ ID=fedora 00:01:30.073 ++ VERSION_ID=38 00:01:30.073 ++ VERSION_CODENAME= 00:01:30.073 ++ PLATFORM_ID=platform:f38 00:01:30.073 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:30.073 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.073 ++ LOGO=fedora-logo-icon 00:01:30.073 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:30.073 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.073 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:30.073 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.073 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.073 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.073 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:30.073 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.073 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:30.073 ++ SUPPORT_END=2024-05-14 00:01:30.073 ++ VARIANT='Cloud Edition' 00:01:30.073 ++ VARIANT_ID=cloud 00:01:30.073 + uname -a 00:01:30.073 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:30.073 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:31.013 Hugepages 00:01:31.013 node hugesize free / total 00:01:31.013 node0 1048576kB 0 / 0 00:01:31.013 node0 2048kB 0 / 0 00:01:31.013 node1 1048576kB 0 / 0 00:01:31.013 node1 2048kB 0 / 0 00:01:31.013 00:01:31.013 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:31.013 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:31.013 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:31.013 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:31.013 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:31.013 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:31.013 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:31.013 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:31.013 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:31.013 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:31.013 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:31.013 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:31.013 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:31.013 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:31.013 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:31.013 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:31.013 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:31.013 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:31.013 + rm -f /tmp/spdk-ld-path 00:01:31.013 + source autorun-spdk.conf 00:01:31.013 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.013 ++ SPDK_TEST_NVMF=1 00:01:31.013 ++ SPDK_TEST_NVME_CLI=1 00:01:31.013 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.013 ++ SPDK_TEST_NVMF_NICS=e810 00:01:31.013 ++ SPDK_TEST_VFIOUSER=1 00:01:31.013 ++ SPDK_RUN_UBSAN=1 00:01:31.013 ++ NET_TYPE=phy 00:01:31.014 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.014 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.014 ++ RUN_NIGHTLY=1 00:01:31.014 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:31.014 + [[ -n '' ]] 00:01:31.014 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:31.273 + for M in /var/spdk/build-*-manifest.txt 00:01:31.273 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:31.273 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.273 + for M in /var/spdk/build-*-manifest.txt 00:01:31.273 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:31.273 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:31.273 ++ uname 00:01:31.273 + [[ Linux == \L\i\n\u\x ]] 00:01:31.273 + sudo dmesg -T 00:01:31.273 + sudo dmesg --clear 00:01:31.273 + dmesg_pid=902748 00:01:31.273 + [[ Fedora Linux == FreeBSD ]] 00:01:31.273 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.273 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.273 + sudo dmesg -Tw 00:01:31.273 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:31.273 + [[ -x /usr/src/fio-static/fio ]] 00:01:31.273 + export FIO_BIN=/usr/src/fio-static/fio 00:01:31.273 + FIO_BIN=/usr/src/fio-static/fio 00:01:31.273 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:31.273 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:31.273 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:31.273 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.273 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.273 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:31.273 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.273 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.273 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:31.273 Test configuration: 00:01:31.273 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.273 SPDK_TEST_NVMF=1 00:01:31.273 SPDK_TEST_NVME_CLI=1 00:01:31.273 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.273 SPDK_TEST_NVMF_NICS=e810 00:01:31.273 SPDK_TEST_VFIOUSER=1 00:01:31.273 SPDK_RUN_UBSAN=1 00:01:31.273 NET_TYPE=phy 00:01:31.273 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:31.273 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.273 RUN_NIGHTLY=1 00:47:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:31.273 00:47:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.273 00:47:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.273 00:47:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.273 00:47:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.273 00:47:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.273 00:47:20 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.273 00:47:20 -- paths/export.sh@5 -- $ export PATH 00:01:31.273 00:47:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.273 00:47:20 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:31.273 00:47:20 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:31.273 00:47:20 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720910840.XXXXXX 00:01:31.273 00:47:20 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720910840.NrVVQW 00:01:31.273 00:47:20 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:31.273 00:47:20 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:01:31.273 00:47:20 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:31.273 00:47:20 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:31.273 00:47:20 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:31.273 00:47:20 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.273 00:47:20 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:31.273 00:47:20 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:31.273 00:47:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.273 00:47:20 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:31.273 00:47:20 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:31.273 00:47:20 -- pm/common@17 -- $ local monitor 00:01:31.273 00:47:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.273 00:47:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.273 00:47:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.273 00:47:20 -- pm/common@21 -- $ date +%s 00:01:31.273 00:47:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.273 00:47:20 -- pm/common@21 -- $ date +%s 00:01:31.273 00:47:20 -- pm/common@25 -- $ sleep 1 00:01:31.273 00:47:20 -- pm/common@21 -- $ date +%s 00:01:31.273 00:47:20 -- pm/common@21 -- $ date +%s 00:01:31.274 00:47:20 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720910840 00:01:31.274 00:47:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720910840 00:01:31.274 00:47:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720910840 00:01:31.274 00:47:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720910840 00:01:31.274 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720910840_collect-vmstat.pm.log 00:01:31.274 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720910840_collect-cpu-load.pm.log 00:01:31.274 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720910840_collect-cpu-temp.pm.log 00:01:31.274 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720910840_collect-bmc-pm.bmc.pm.log 00:01:32.209 00:47:21 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:32.209 00:47:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:32.209 00:47:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:32.209 00:47:21 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.209 00:47:21 -- spdk/autobuild.sh@16 -- $ date -u 00:01:32.209 Sat Jul 13 10:47:21 PM UTC 2024 00:01:32.209 00:47:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:32.209 v24.09-pre-202-g719d03c6a 00:01:32.209 00:47:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:32.209 00:47:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:32.209 00:47:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:32.209 00:47:21 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:32.209 00:47:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:32.209 00:47:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.209 ************************************ 00:01:32.209 START TEST ubsan 00:01:32.209 ************************************ 00:01:32.209 00:47:21 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:32.209 using ubsan 00:01:32.209 00:01:32.209 real 0m0.000s 00:01:32.209 user 0m0.000s 00:01:32.209 sys 0m0.000s 00:01:32.209 00:47:21 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:32.209 00:47:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:32.209 ************************************ 00:01:32.209 END TEST ubsan 00:01:32.209 ************************************ 00:01:32.502 00:47:21 -- common/autotest_common.sh@1142 -- $ return 0 00:01:32.502 00:47:21 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:32.502 00:47:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:32.502 00:47:21 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:32.502 00:47:21 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:32.502 00:47:21 -- common/autotest_common.sh@1105 -- $ 
xtrace_disable 00:01:32.502 00:47:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.502 ************************************ 00:01:32.502 START TEST build_native_dpdk 00:01:32.502 ************************************ 00:01:32.502 00:47:21 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:32.502 caf0f5d395 version: 22.11.4 00:01:32.502 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:32.502 dc9c799c7d vhost: fix missing spinlock unlock 00:01:32.502 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:32.502 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:32.502 00:47:21 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:32.503 
00:47:21 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:32.503 00:47:21 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:32.503 patching file config/rte_config.h 00:01:32.503 Hunk #1 succeeded at 60 (offset 1 line). 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:32.503 00:47:21 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:36.700 The Meson build system 00:01:36.700 Version: 1.3.1 00:01:36.700 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:36.700 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:36.700 Build type: native build 00:01:36.700 Program cat found: YES (/usr/bin/cat) 00:01:36.700 Project name: DPDK 00:01:36.700 Project version: 22.11.4 00:01:36.700 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:36.700 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:36.700 Host machine cpu family: x86_64 00:01:36.700 Host machine cpu: x86_64 00:01:36.700 Message: ## Building in Developer Mode ## 00:01:36.700 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:36.700 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:36.700 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:36.700 Program objdump found: YES (/usr/bin/objdump) 00:01:36.700 Program python3 found: YES (/usr/bin/python3) 00:01:36.700 Program cat found: YES (/usr/bin/cat) 00:01:36.700 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:01:36.700 Checking for size of "void *" : 8 00:01:36.700 Checking for size of "void *" : 8 (cached) 00:01:36.700 Library m found: YES 00:01:36.700 Library numa found: YES 00:01:36.700 Has header "numaif.h" : YES 00:01:36.700 Library fdt found: NO 00:01:36.700 Library execinfo found: NO 00:01:36.700 Has header "execinfo.h" : YES 00:01:36.700 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:36.700 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:36.700 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:36.700 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:36.700 Run-time dependency openssl found: YES 3.0.9 00:01:36.700 Run-time dependency libpcap found: YES 1.10.4 00:01:36.700 Has header "pcap.h" with dependency libpcap: YES 00:01:36.700 Compiler for C supports arguments -Wcast-qual: YES 00:01:36.700 Compiler for C supports arguments -Wdeprecated: YES 00:01:36.700 Compiler for C supports arguments -Wformat: YES 00:01:36.700 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:36.700 Compiler for C supports arguments -Wformat-security: NO 00:01:36.700 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.700 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:36.700 Compiler for C supports arguments -Wnested-externs: YES 00:01:36.700 Compiler for C supports arguments -Wold-style-definition: YES 00:01:36.700 Compiler for C supports arguments -Wpointer-arith: YES 00:01:36.700 Compiler for C supports arguments -Wsign-compare: YES 00:01:36.700 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:36.700 Compiler for C supports arguments -Wundef: YES 00:01:36.700 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.700 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:36.700 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:36.700 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.700 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:36.700 Compiler for C supports arguments -mavx512f: YES 00:01:36.700 Checking if "AVX512 checking" compiles: YES 00:01:36.700 Fetching value of define "__SSE4_2__" : 1 00:01:36.700 Fetching value of define "__AES__" : 1 00:01:36.700 Fetching value of define "__AVX__" : 1 00:01:36.700 Fetching value of define "__AVX2__" : (undefined) 00:01:36.700 Fetching value of define "__AVX512BW__" : (undefined) 00:01:36.700 Fetching value of define "__AVX512CD__" : (undefined) 00:01:36.700 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:36.700 Fetching value of define "__AVX512F__" : (undefined) 00:01:36.700 Fetching value of define "__AVX512VL__" : (undefined) 00:01:36.700 Fetching value of define "__PCLMUL__" : 1 00:01:36.700 Fetching value of define "__RDRND__" : 1 00:01:36.700 Fetching value of define "__RDSEED__" : (undefined) 00:01:36.700 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:36.700 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.700 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.700 Message: lib/telemetry: Defining dependency "telemetry" 00:01:36.700 Checking for function "getentropy" : YES 00:01:36.700 Message: lib/eal: Defining dependency "eal" 00:01:36.700 Message: lib/ring: Defining dependency "ring" 00:01:36.700 Message: lib/rcu: Defining dependency "rcu" 00:01:36.700 Message: lib/mempool: Defining dependency "mempool" 00:01:36.700 Message: 
lib/mbuf: Defining dependency "mbuf" 00:01:36.700 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.700 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.700 Compiler for C supports arguments -mpclmul: YES 00:01:36.700 Compiler for C supports arguments -maes: YES 00:01:36.700 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.700 Compiler for C supports arguments -mavx512bw: YES 00:01:36.700 Compiler for C supports arguments -mavx512dq: YES 00:01:36.700 Compiler for C supports arguments -mavx512vl: YES 00:01:36.700 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:36.700 Compiler for C supports arguments -mavx2: YES 00:01:36.700 Compiler for C supports arguments -mavx: YES 00:01:36.700 Message: lib/net: Defining dependency "net" 00:01:36.700 Message: lib/meter: Defining dependency "meter" 00:01:36.700 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.700 Message: lib/pci: Defining dependency "pci" 00:01:36.700 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.700 Message: lib/metrics: Defining dependency "metrics" 00:01:36.700 Message: lib/hash: Defining dependency "hash" 00:01:36.700 Message: lib/timer: Defining dependency "timer" 00:01:36.700 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:36.700 Compiler for C supports arguments -mavx2: YES (cached) 00:01:36.700 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.700 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:36.701 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:36.701 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:36.701 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:36.701 Message: lib/acl: Defining dependency "acl" 00:01:36.701 Message: lib/bbdev: Defining dependency "bbdev" 00:01:36.701 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:36.701 Run-time dependency libelf found: YES 0.190 00:01:36.701 Message: lib/bpf: Defining dependency "bpf" 00:01:36.701 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:36.701 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.701 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.701 Message: lib/distributor: Defining dependency "distributor" 00:01:36.701 Message: lib/efd: Defining dependency "efd" 00:01:36.701 Message: lib/eventdev: Defining dependency "eventdev" 00:01:36.701 Message: lib/gpudev: Defining dependency "gpudev" 00:01:36.701 Message: lib/gro: Defining dependency "gro" 00:01:36.701 Message: lib/gso: Defining dependency "gso" 00:01:36.701 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:36.701 Message: lib/jobstats: Defining dependency "jobstats" 00:01:36.701 Message: lib/latencystats: Defining dependency "latencystats" 00:01:36.701 Message: lib/lpm: Defining dependency "lpm" 00:01:36.701 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.701 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:36.701 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:36.701 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:36.701 Message: lib/member: Defining dependency "member" 00:01:36.701 Message: lib/pcapng: Defining dependency "pcapng" 00:01:36.701 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.701 Message: lib/power: Defining dependency "power" 00:01:36.701 Message: lib/rawdev: Defining dependency "rawdev" 00:01:36.701 
Message: lib/regexdev: Defining dependency "regexdev" 00:01:36.701 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.701 Message: lib/rib: Defining dependency "rib" 00:01:36.701 Message: lib/reorder: Defining dependency "reorder" 00:01:36.701 Message: lib/sched: Defining dependency "sched" 00:01:36.701 Message: lib/security: Defining dependency "security" 00:01:36.701 Message: lib/stack: Defining dependency "stack" 00:01:36.701 Has header "linux/userfaultfd.h" : YES 00:01:36.701 Message: lib/vhost: Defining dependency "vhost" 00:01:36.701 Message: lib/ipsec: Defining dependency "ipsec" 00:01:36.701 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.701 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:36.701 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:36.701 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:36.701 Message: lib/fib: Defining dependency "fib" 00:01:36.701 Message: lib/port: Defining dependency "port" 00:01:36.701 Message: lib/pdump: Defining dependency "pdump" 00:01:36.701 Message: lib/table: Defining dependency "table" 00:01:36.701 Message: lib/pipeline: Defining dependency "pipeline" 00:01:36.701 Message: lib/graph: Defining dependency "graph" 00:01:36.701 Message: lib/node: Defining dependency "node" 00:01:36.701 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.701 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.701 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.701 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:36.701 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:36.701 Compiler for C supports arguments -Wno-unused-value: YES 00:01:37.642 Compiler for C supports arguments -Wno-format: YES 00:01:37.642 Compiler for C supports arguments -Wno-format-security: YES 00:01:37.642 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:37.642 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:37.642 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:37.642 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:37.642 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:37.642 Compiler for C supports arguments -mavx2: YES (cached) 00:01:37.642 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.642 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.642 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:37.642 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:37.642 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:37.642 Program doxygen found: YES (/usr/bin/doxygen) 00:01:37.642 Configuring doxy-api.conf using configuration 00:01:37.642 Program sphinx-build found: NO 00:01:37.642 Configuring rte_build_config.h using configuration 00:01:37.642 Message: 00:01:37.642 ================= 00:01:37.642 Applications Enabled 00:01:37.642 ================= 00:01:37.642 00:01:37.642 apps: 00:01:37.642 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:37.642 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:37.642 test-security-perf, 00:01:37.642 00:01:37.642 Message: 00:01:37.642 ================= 00:01:37.642 Libraries Enabled 00:01:37.642 ================= 00:01:37.642 00:01:37.642 libs: 00:01:37.642 kvargs, telemetry, eal, ring, rcu, 
mempool, mbuf, net, 00:01:37.642 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:37.642 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:37.642 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:37.642 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:37.642 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:37.642 table, pipeline, graph, node, 00:01:37.642 00:01:37.642 Message: 00:01:37.642 =============== 00:01:37.642 Drivers Enabled 00:01:37.643 =============== 00:01:37.643 00:01:37.643 common: 00:01:37.643 00:01:37.643 bus: 00:01:37.643 pci, vdev, 00:01:37.643 mempool: 00:01:37.643 ring, 00:01:37.643 dma: 00:01:37.643 00:01:37.643 net: 00:01:37.643 i40e, 00:01:37.643 raw: 00:01:37.643 00:01:37.643 crypto: 00:01:37.643 00:01:37.643 compress: 00:01:37.643 00:01:37.643 regex: 00:01:37.643 00:01:37.643 vdpa: 00:01:37.643 00:01:37.643 event: 00:01:37.643 00:01:37.643 baseband: 00:01:37.643 00:01:37.643 gpu: 00:01:37.643 00:01:37.643 00:01:37.643 Message: 00:01:37.643 ================= 00:01:37.643 Content Skipped 00:01:37.643 ================= 00:01:37.643 00:01:37.643 apps: 00:01:37.643 00:01:37.643 libs: 00:01:37.643 kni: explicitly disabled via build config (deprecated lib) 00:01:37.643 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:37.643 00:01:37.643 drivers: 00:01:37.643 common/cpt: not in enabled drivers build config 00:01:37.643 common/dpaax: not in enabled drivers build config 00:01:37.643 common/iavf: not in enabled drivers build config 00:01:37.643 common/idpf: not in enabled drivers build config 00:01:37.643 common/mvep: not in enabled drivers build config 00:01:37.643 common/octeontx: not in enabled drivers build config 00:01:37.643 bus/auxiliary: not in enabled drivers build config 00:01:37.643 bus/dpaa: not in enabled drivers build config 00:01:37.643 bus/fslmc: not in enabled drivers build config 00:01:37.643 bus/ifpga: not in enabled drivers build config 00:01:37.643 bus/vmbus: not in enabled drivers build config 00:01:37.643 common/cnxk: not in enabled drivers build config 00:01:37.643 common/mlx5: not in enabled drivers build config 00:01:37.643 common/qat: not in enabled drivers build config 00:01:37.643 common/sfc_efx: not in enabled drivers build config 00:01:37.643 mempool/bucket: not in enabled drivers build config 00:01:37.643 mempool/cnxk: not in enabled drivers build config 00:01:37.643 mempool/dpaa: not in enabled drivers build config 00:01:37.643 mempool/dpaa2: not in enabled drivers build config 00:01:37.643 mempool/octeontx: not in enabled drivers build config 00:01:37.643 mempool/stack: not in enabled drivers build config 00:01:37.643 dma/cnxk: not in enabled drivers build config 00:01:37.643 dma/dpaa: not in enabled drivers build config 00:01:37.643 dma/dpaa2: not in enabled drivers build config 00:01:37.643 dma/hisilicon: not in enabled drivers build config 00:01:37.643 dma/idxd: not in enabled drivers build config 00:01:37.643 dma/ioat: not in enabled drivers build config 00:01:37.643 dma/skeleton: not in enabled drivers build config 00:01:37.643 net/af_packet: not in enabled drivers build config 00:01:37.643 net/af_xdp: not in enabled drivers build config 00:01:37.643 net/ark: not in enabled drivers build config 00:01:37.643 net/atlantic: not in enabled drivers build config 00:01:37.643 net/avp: not in enabled drivers build config 00:01:37.643 net/axgbe: not in enabled drivers build config 00:01:37.643 net/bnx2x: not in enabled 
drivers build config 00:01:37.643 net/bnxt: not in enabled drivers build config 00:01:37.643 net/bonding: not in enabled drivers build config 00:01:37.643 net/cnxk: not in enabled drivers build config 00:01:37.643 net/cxgbe: not in enabled drivers build config 00:01:37.643 net/dpaa: not in enabled drivers build config 00:01:37.643 net/dpaa2: not in enabled drivers build config 00:01:37.643 net/e1000: not in enabled drivers build config 00:01:37.643 net/ena: not in enabled drivers build config 00:01:37.643 net/enetc: not in enabled drivers build config 00:01:37.643 net/enetfec: not in enabled drivers build config 00:01:37.643 net/enic: not in enabled drivers build config 00:01:37.643 net/failsafe: not in enabled drivers build config 00:01:37.643 net/fm10k: not in enabled drivers build config 00:01:37.643 net/gve: not in enabled drivers build config 00:01:37.643 net/hinic: not in enabled drivers build config 00:01:37.643 net/hns3: not in enabled drivers build config 00:01:37.643 net/iavf: not in enabled drivers build config 00:01:37.643 net/ice: not in enabled drivers build config 00:01:37.643 net/idpf: not in enabled drivers build config 00:01:37.643 net/igc: not in enabled drivers build config 00:01:37.643 net/ionic: not in enabled drivers build config 00:01:37.643 net/ipn3ke: not in enabled drivers build config 00:01:37.643 net/ixgbe: not in enabled drivers build config 00:01:37.643 net/kni: not in enabled drivers build config 00:01:37.643 net/liquidio: not in enabled drivers build config 00:01:37.643 net/mana: not in enabled drivers build config 00:01:37.643 net/memif: not in enabled drivers build config 00:01:37.643 net/mlx4: not in enabled drivers build config 00:01:37.643 net/mlx5: not in enabled drivers build config 00:01:37.643 net/mvneta: not in enabled drivers build config 00:01:37.643 net/mvpp2: not in enabled drivers build config 00:01:37.643 net/netvsc: not in enabled drivers build config 00:01:37.643 net/nfb: not in enabled drivers build config 00:01:37.643 net/nfp: not in enabled drivers build config 00:01:37.643 net/ngbe: not in enabled drivers build config 00:01:37.643 net/null: not in enabled drivers build config 00:01:37.643 net/octeontx: not in enabled drivers build config 00:01:37.643 net/octeon_ep: not in enabled drivers build config 00:01:37.643 net/pcap: not in enabled drivers build config 00:01:37.643 net/pfe: not in enabled drivers build config 00:01:37.643 net/qede: not in enabled drivers build config 00:01:37.643 net/ring: not in enabled drivers build config 00:01:37.643 net/sfc: not in enabled drivers build config 00:01:37.643 net/softnic: not in enabled drivers build config 00:01:37.643 net/tap: not in enabled drivers build config 00:01:37.643 net/thunderx: not in enabled drivers build config 00:01:37.643 net/txgbe: not in enabled drivers build config 00:01:37.643 net/vdev_netvsc: not in enabled drivers build config 00:01:37.643 net/vhost: not in enabled drivers build config 00:01:37.643 net/virtio: not in enabled drivers build config 00:01:37.643 net/vmxnet3: not in enabled drivers build config 00:01:37.643 raw/cnxk_bphy: not in enabled drivers build config 00:01:37.643 raw/cnxk_gpio: not in enabled drivers build config 00:01:37.643 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:37.643 raw/ifpga: not in enabled drivers build config 00:01:37.643 raw/ntb: not in enabled drivers build config 00:01:37.643 raw/skeleton: not in enabled drivers build config 00:01:37.643 crypto/armv8: not in enabled drivers build config 00:01:37.643 crypto/bcmfs: not in 
enabled drivers build config 00:01:37.643 crypto/caam_jr: not in enabled drivers build config 00:01:37.643 crypto/ccp: not in enabled drivers build config 00:01:37.643 crypto/cnxk: not in enabled drivers build config 00:01:37.643 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.643 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.643 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.643 crypto/mlx5: not in enabled drivers build config 00:01:37.643 crypto/mvsam: not in enabled drivers build config 00:01:37.643 crypto/nitrox: not in enabled drivers build config 00:01:37.643 crypto/null: not in enabled drivers build config 00:01:37.643 crypto/octeontx: not in enabled drivers build config 00:01:37.643 crypto/openssl: not in enabled drivers build config 00:01:37.643 crypto/scheduler: not in enabled drivers build config 00:01:37.643 crypto/uadk: not in enabled drivers build config 00:01:37.643 crypto/virtio: not in enabled drivers build config 00:01:37.643 compress/isal: not in enabled drivers build config 00:01:37.643 compress/mlx5: not in enabled drivers build config 00:01:37.643 compress/octeontx: not in enabled drivers build config 00:01:37.643 compress/zlib: not in enabled drivers build config 00:01:37.643 regex/mlx5: not in enabled drivers build config 00:01:37.643 regex/cn9k: not in enabled drivers build config 00:01:37.643 vdpa/ifc: not in enabled drivers build config 00:01:37.643 vdpa/mlx5: not in enabled drivers build config 00:01:37.643 vdpa/sfc: not in enabled drivers build config 00:01:37.643 event/cnxk: not in enabled drivers build config 00:01:37.643 event/dlb2: not in enabled drivers build config 00:01:37.643 event/dpaa: not in enabled drivers build config 00:01:37.643 event/dpaa2: not in enabled drivers build config 00:01:37.643 event/dsw: not in enabled drivers build config 00:01:37.643 event/opdl: not in enabled drivers build config 00:01:37.643 event/skeleton: not in enabled drivers build config 00:01:37.643 event/sw: not in enabled drivers build config 00:01:37.643 event/octeontx: not in enabled drivers build config 00:01:37.643 baseband/acc: not in enabled drivers build config 00:01:37.643 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:37.643 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:37.643 baseband/la12xx: not in enabled drivers build config 00:01:37.643 baseband/null: not in enabled drivers build config 00:01:37.643 baseband/turbo_sw: not in enabled drivers build config 00:01:37.643 gpu/cuda: not in enabled drivers build config 00:01:37.643 00:01:37.643 00:01:37.643 Build targets in project: 316 00:01:37.643 00:01:37.643 DPDK 22.11.4 00:01:37.643 00:01:37.643 User defined options 00:01:37.643 libdir : lib 00:01:37.643 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:37.643 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:37.643 c_link_args : 00:01:37.643 enable_docs : false 00:01:37.643 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:37.643 enable_kmods : false 00:01:37.643 machine : native 00:01:37.643 tests : false 00:01:37.643 00:01:37.643 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.643 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:37.643 00:47:26 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:37.643 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:37.643 [1/745] Generating lib/rte_telemetry_def with a custom command 00:01:37.643 [2/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:37.643 [3/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:37.643 [4/745] Generating lib/rte_kvargs_def with a custom command 00:01:37.643 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.643 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.643 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.643 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.643 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.643 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.643 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.644 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.905 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.905 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.905 [15/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:37.905 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.905 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.905 [18/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.905 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.905 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.905 [21/745] Linking static target lib/librte_kvargs.a 00:01:37.905 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.905 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.905 [24/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:37.905 [25/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.905 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.905 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.905 [28/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.905 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:37.905 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:37.905 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:37.905 [32/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:37.905 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:37.905 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:37.905 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:37.905 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.905 [37/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:37.905 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.905 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.905 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:37.905 [41/745] Generating lib/rte_eal_mingw with a custom command 00:01:37.905 [42/745] Generating lib/rte_eal_def with a custom command 00:01:37.905 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:37.905 [44/745] Generating lib/rte_ring_def with a custom command 00:01:37.905 [45/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.905 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:37.905 [47/745] Generating lib/rte_ring_mingw with a custom command 00:01:37.905 [48/745] Generating lib/rte_rcu_def with a custom command 00:01:37.905 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.905 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:37.905 [51/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:37.905 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:37.905 [53/745] Generating lib/rte_rcu_mingw with a custom command 00:01:37.905 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.905 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:37.905 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.905 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.905 [58/745] Generating lib/rte_mempool_def with a custom command 00:01:37.905 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:37.905 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:37.905 [61/745] Generating lib/rte_mempool_mingw with a custom command 00:01:37.905 [62/745] Generating lib/rte_mbuf_def with a custom command 00:01:37.905 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:37.905 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:38.169 [65/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.169 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:38.169 [67/745] Generating lib/rte_net_mingw with a custom command 00:01:38.169 [68/745] Generating lib/rte_net_def with a custom command 00:01:38.169 [69/745] Generating lib/rte_meter_mingw with a custom command 00:01:38.169 [70/745] Generating lib/rte_meter_def with a custom command 00:01:38.169 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.169 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.169 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.169 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.169 [75/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.169 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:38.169 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.169 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:38.169 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.169 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.169 [81/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.169 [82/745] Linking static target lib/librte_ring.a 00:01:38.169 [83/745] Linking target lib/librte_kvargs.so.23.0 00:01:38.169 [84/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:38.433 [85/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.433 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.433 [87/745] Generating lib/rte_pci_def with a custom command 00:01:38.433 [88/745] Linking static target lib/librte_meter.a 00:01:38.433 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.433 [90/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.433 [91/745] Generating lib/rte_pci_mingw with a custom command 00:01:38.433 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.433 [93/745] Linking static target lib/librte_pci.a 00:01:38.433 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.433 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:38.433 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.433 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.433 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:38.697 [99/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.697 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:38.697 [101/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.697 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.697 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.697 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:38.697 [105/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.697 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:38.697 [107/745] Linking static target lib/librte_telemetry.a 00:01:38.697 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:38.697 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:38.697 [110/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.697 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:38.697 [112/745] Generating lib/rte_cmdline_def with a custom command 00:01:38.697 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:38.697 [114/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:38.955 [115/745] Generating lib/rte_metrics_mingw with a custom command 00:01:38.955 [116/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:38.955 [117/745] Generating lib/rte_metrics_def with a custom command 00:01:38.955 [118/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:38.955 [119/745] Generating lib/rte_timer_def with a custom command 00:01:38.955 [120/745] Generating lib/rte_hash_mingw with a custom command 00:01:38.955 [121/745] Generating lib/rte_hash_def with a custom command 00:01:38.955 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:38.955 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.955 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:38.956 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.216 [126/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.216 [127/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:39.216 [128/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.216 [129/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.216 [130/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.216 [131/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:39.216 [132/745] Generating lib/rte_acl_def with a custom command 00:01:39.216 [133/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.216 [134/745] Generating lib/rte_acl_mingw with a custom command 00:01:39.216 [135/745] Generating lib/rte_bbdev_def with a custom command 00:01:39.216 [136/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:39.216 [137/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.216 [138/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.216 [139/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.216 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.216 [141/745] Generating lib/rte_bitratestats_def with a custom command 00:01:39.216 [142/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:39.216 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.216 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:39.478 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.478 [146/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.478 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.478 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.478 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.478 [150/745] Generating lib/rte_bpf_def with a custom command 00:01:39.478 [151/745] Generating lib/rte_bpf_mingw with a custom command 00:01:39.478 [152/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.478 [153/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.478 [154/745] Generating lib/rte_cfgfile_def with a custom command 00:01:39.478 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.478 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:39.478 [157/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.478 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:39.738 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:39.738 [160/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:39.738 [161/745] Generating lib/rte_compressdev_def with a custom command 00:01:39.738 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.738 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.738 
[164/745] Generating lib/rte_cryptodev_def with a custom command 00:01:39.738 [165/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:39.738 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.738 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.738 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.738 [169/745] Generating lib/rte_distributor_def with a custom command 00:01:39.738 [170/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.738 [171/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:39.738 [172/745] Linking static target lib/librte_rcu.a 00:01:39.738 [173/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:39.738 [174/745] Linking static target lib/librte_cmdline.a 00:01:39.738 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:39.738 [176/745] Linking static target lib/librte_timer.a 00:01:39.738 [177/745] Generating lib/rte_efd_mingw with a custom command 00:01:39.738 [178/745] Generating lib/rte_efd_def with a custom command 00:01:39.738 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.738 [180/745] Linking static target lib/librte_net.a 00:01:40.001 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.001 [182/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:40.001 [183/745] Linking static target lib/librte_cfgfile.a 00:01:40.001 [184/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:40.001 [185/745] Linking static target lib/librte_metrics.a 00:01:40.001 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.001 [187/745] Linking static target lib/librte_mempool.a 00:01:40.264 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:40.264 [189/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.264 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.264 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.264 [192/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.264 [193/745] Generating lib/rte_eventdev_def with a custom command 00:01:40.264 [194/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.264 [195/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:40.533 [196/745] Linking static target lib/librte_eal.a 00:01:40.533 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:40.533 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:40.533 [199/745] Generating lib/rte_gpudev_def with a custom command 00:01:40.533 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:40.533 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:40.533 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:40.533 [203/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.533 [204/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:40.533 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:40.533 [206/745] Linking static target lib/librte_bitratestats.a 00:01:40.533 [207/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:40.533 [208/745] Generating lib/rte_gro_def with a custom command 00:01:40.533 [209/745] Generating lib/rte_gro_mingw with a custom command 00:01:40.533 [210/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.793 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:40.793 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:40.793 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.793 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:40.793 [215/745] Generating lib/rte_gso_def with a custom command 00:01:41.054 [216/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.054 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:41.054 [218/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.054 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:41.054 [220/745] Generating lib/rte_gso_mingw with a custom command 00:01:41.054 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:41.054 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:41.054 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:41.054 [224/745] Generating lib/rte_ip_frag_def with a custom command 00:01:41.054 [225/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:41.317 [226/745] Linking static target lib/librte_bbdev.a 00:01:41.317 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:41.317 [228/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.317 [229/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.317 [230/745] Generating lib/rte_jobstats_def with a custom command 00:01:41.317 [231/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:41.317 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:41.317 [233/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.317 [234/745] Generating lib/rte_latencystats_def with a custom command 00:01:41.317 [235/745] Linking static target lib/librte_compressdev.a 00:01:41.317 [236/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:41.317 [237/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.317 [238/745] Generating lib/rte_lpm_def with a custom command 00:01:41.317 [239/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:41.317 [240/745] Generating lib/rte_lpm_mingw with a custom command 00:01:41.581 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:41.581 [242/745] Linking static target lib/librte_jobstats.a 00:01:41.581 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:41.581 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.841 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:41.841 [246/745] Linking static target lib/librte_distributor.a 00:01:41.841 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:41.841 [248/745] 
Generating lib/rte_member_def with a custom command 00:01:41.841 [249/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:41.841 [250/745] Generating lib/rte_member_mingw with a custom command 00:01:41.841 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:41.841 [252/745] Generating lib/rte_pcapng_def with a custom command 00:01:41.841 [253/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:41.841 [254/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.111 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:42.111 [256/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.111 [257/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:42.111 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:42.111 [259/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:42.111 [260/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:42.111 [261/745] Linking static target lib/librte_bpf.a 00:01:42.111 [262/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:42.111 [263/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:42.111 [264/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:42.111 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:42.111 [266/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:42.111 [267/745] Linking static target lib/librte_gpudev.a 00:01:42.111 [268/745] Generating lib/rte_power_mingw with a custom command 00:01:42.111 [269/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.111 [270/745] Generating lib/rte_power_def with a custom command 00:01:42.111 [271/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:42.111 [272/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:42.370 [273/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:42.370 [274/745] Linking static target lib/librte_gro.a 00:01:42.370 [275/745] Generating lib/rte_rawdev_def with a custom command 00:01:42.370 [276/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:42.370 [277/745] Generating lib/rte_regexdev_def with a custom command 00:01:42.370 [278/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:42.370 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:42.370 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:42.370 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:42.370 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:42.370 [283/745] Generating lib/rte_rib_def with a custom command 00:01:42.370 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:42.370 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:42.631 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:42.631 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:42.631 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:42.631 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.631 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.631 
[291/745] Generating lib/rte_sched_def with a custom command 00:01:42.631 [292/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:42.631 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:42.631 [294/745] Generating lib/rte_sched_mingw with a custom command 00:01:42.631 [295/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.631 [296/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:42.631 [297/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:42.897 [298/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:42.897 [299/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:42.897 [300/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:42.897 [301/745] Generating lib/rte_security_def with a custom command 00:01:42.897 [302/745] Generating lib/rte_security_mingw with a custom command 00:01:42.897 [303/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:42.897 [304/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:42.897 [305/745] Generating lib/rte_stack_def with a custom command 00:01:42.897 [306/745] Generating lib/rte_stack_mingw with a custom command 00:01:42.897 [307/745] Linking static target lib/librte_latencystats.a 00:01:42.897 [308/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:42.897 [309/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:42.897 [310/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:42.897 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:42.897 [312/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:42.897 [313/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:42.897 [314/745] Linking static target lib/librte_rawdev.a 00:01:42.897 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:42.897 [316/745] Generating lib/rte_vhost_def with a custom command 00:01:42.897 [317/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:42.897 [318/745] Generating lib/rte_vhost_mingw with a custom command 00:01:42.897 [319/745] Linking static target lib/librte_stack.a 00:01:42.897 [320/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:43.161 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:43.161 [322/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:43.161 [323/745] Linking static target lib/librte_dmadev.a 00:01:43.161 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:43.161 [325/745] Linking static target lib/librte_ip_frag.a 00:01:43.161 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.161 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:43.161 [328/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:43.161 [329/745] Generating lib/rte_ipsec_def with a custom command 00:01:43.161 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:43.423 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.423 [332/745] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:43.423 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:43.423 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.690 [335/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:43.690 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:43.690 [337/745] Linking static target lib/librte_gso.a 00:01:43.690 [338/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.690 [339/745] Generating lib/rte_fib_def with a custom command 00:01:43.690 [340/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.690 [341/745] Generating lib/rte_fib_mingw with a custom command 00:01:43.690 [342/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:43.690 [343/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:43.690 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:43.690 [345/745] Linking static target lib/librte_regexdev.a 00:01:43.951 [346/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.951 [347/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.951 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:43.951 [349/745] Linking static target lib/librte_efd.a 00:01:43.951 [350/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:44.212 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:44.212 [352/745] Linking static target lib/librte_pcapng.a 00:01:44.212 [353/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:44.212 [354/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:44.212 [355/745] Linking static target lib/librte_lpm.a 00:01:44.212 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:44.212 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:44.480 [358/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:44.480 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:44.480 [360/745] Linking static target lib/librte_reorder.a 00:01:44.480 [361/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.480 [362/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:44.480 [363/745] Generating lib/rte_port_def with a custom command 00:01:44.480 [364/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:44.480 [365/745] Linking static target lib/acl/libavx2_tmp.a 00:01:44.480 [366/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:44.480 [367/745] Generating lib/rte_port_mingw with a custom command 00:01:44.480 [368/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:44.480 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:44.480 [370/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:44.744 [371/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:44.744 [372/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:44.744 [373/745] Generating lib/rte_pdump_def with a custom command 00:01:44.744 [374/745] Generating lib/rte_pdump_mingw with a 
custom command 00:01:44.744 [375/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:44.744 [376/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:44.744 [377/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:44.744 [378/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.744 [379/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:44.744 [380/745] Linking static target lib/librte_hash.a 00:01:44.744 [381/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:44.744 [382/745] Linking static target lib/librte_security.a 00:01:44.744 [383/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.744 [384/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:44.744 [385/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:44.744 [386/745] Linking static target lib/librte_power.a 00:01:44.744 [387/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.007 [388/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:45.007 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.007 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:45.007 [391/745] Linking static target lib/librte_rib.a 00:01:45.270 [392/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:45.270 [393/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:45.270 [394/745] Linking static target lib/acl/libavx512_tmp.a 00:01:45.270 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:45.270 [396/745] Linking static target lib/librte_acl.a 00:01:45.270 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:45.270 [398/745] Generating lib/rte_table_def with a custom command 00:01:45.270 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:45.537 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.537 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:45.537 [402/745] Linking static target lib/librte_ethdev.a 00:01:45.537 [403/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:45.811 [404/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.811 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.811 [406/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:45.811 [407/745] Linking static target lib/librte_mbuf.a 00:01:45.811 [408/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.811 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:45.811 [410/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:45.811 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:45.811 [412/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:45.811 [413/745] Generating lib/rte_pipeline_def with a custom command 00:01:45.811 [414/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:46.070 [415/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:46.070 [416/745] 
Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:46.070 [417/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:46.070 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:46.070 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:46.070 [420/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:46.070 [421/745] Generating lib/rte_graph_def with a custom command 00:01:46.070 [422/745] Generating lib/rte_graph_mingw with a custom command 00:01:46.070 [423/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:46.070 [424/745] Linking static target lib/librte_fib.a 00:01:46.334 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:46.334 [426/745] Linking static target lib/librte_member.a 00:01:46.334 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:46.334 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:46.334 [429/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:46.334 [430/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.334 [431/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:46.334 [432/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:46.334 [433/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:46.334 [434/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:46.334 [435/745] Linking static target lib/librte_eventdev.a 00:01:46.600 [436/745] Generating lib/rte_node_def with a custom command 00:01:46.600 [437/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:46.600 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:46.600 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:46.600 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.600 [441/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.600 [442/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.600 [443/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:46.600 [444/745] Linking static target lib/librte_sched.a 00:01:46.600 [445/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:46.600 [446/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:46.865 [447/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.865 [448/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:46.865 [449/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.865 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.865 [451/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:46.865 [452/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:46.865 [453/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:46.865 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.865 [455/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:46.865 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:46.865 
[457/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:46.865 [458/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.865 [459/745] Linking static target lib/librte_cryptodev.a 00:01:46.865 [460/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:47.129 [461/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.129 [462/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:47.129 [463/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:47.129 [464/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:47.129 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:47.129 [466/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:47.129 [467/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:47.129 [468/745] Linking static target lib/librte_pdump.a 00:01:47.129 [469/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:47.129 [470/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:47.129 [471/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.393 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:47.393 [473/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.393 [474/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.393 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:47.393 [476/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.393 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:47.393 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:47.393 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:47.660 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:47.660 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:47.660 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:47.660 [483/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.660 [484/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.660 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.660 [486/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:47.660 [487/745] Linking static target drivers/librte_bus_vdev.a 00:01:47.660 [488/745] Linking static target lib/librte_table.a 00:01:47.660 [489/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.925 [490/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:47.925 [491/745] Linking static target lib/librte_ipsec.a 00:01:47.925 [492/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:47.925 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.925 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:48.193 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.193 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:48.193 [497/745] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:48.193 [498/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:48.193 [499/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:48.193 [500/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:48.193 [501/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:48.458 [502/745] Linking static target lib/librte_graph.a 00:01:48.458 [503/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:48.458 [504/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:48.458 [505/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.458 [506/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:48.458 [507/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:48.458 [508/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.458 [509/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.458 [510/745] Linking static target drivers/librte_bus_pci.a 00:01:48.722 [511/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:48.722 [512/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.722 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:48.722 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.988 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:48.988 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.252 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:49.252 [518/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:49.252 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:49.252 [520/745] Linking static target lib/librte_port.a 00:01:49.252 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:49.517 [522/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.517 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:49.517 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:49.517 [525/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:49.517 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:49.779 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.779 [528/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:49.779 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:49.779 [530/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:49.780 [531/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:49.780 [532/745] Linking static target drivers/librte_mempool_ring.a 00:01:49.780 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.079 [534/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:50.080 [535/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:50.080 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:50.080 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:50.080 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:50.349 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:50.349 [540/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.349 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.614 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:50.614 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:50.614 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:50.614 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:50.876 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:50.876 [547/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:50.876 [548/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:50.876 [549/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:50.876 [550/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:51.141 [551/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:51.408 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:51.408 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:51.408 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:51.408 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:51.408 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:51.669 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:51.669 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:51.932 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:51.932 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:51.932 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:52.195 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:52.195 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:52.195 [564/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:52.195 [565/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:52.454 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:52.454 [567/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:52.454 [568/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:52.454 [569/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:52.454 [570/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 
00:01:52.454 [571/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:52.454 [572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:52.719 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:52.720 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:52.720 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:52.720 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:52.986 [577/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:52.986 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:52.986 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:52.986 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:52.986 [581/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.986 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:53.247 [583/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:53.247 [584/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:53.247 [585/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:53.247 [586/745] Linking target lib/librte_eal.so.23.0 00:01:53.517 [587/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:53.517 [588/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.517 [589/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:53.517 [590/745] Linking target lib/librte_ring.so.23.0 00:01:53.517 [591/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:53.784 [592/745] Linking target lib/librte_meter.so.23.0 00:01:53.784 [593/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:53.784 [594/745] Linking target lib/librte_rcu.so.23.0 00:01:54.045 [595/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:54.045 [596/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:54.045 [597/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:54.045 [598/745] Linking target lib/librte_mempool.so.23.0 00:01:54.045 [599/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:54.045 [600/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:54.045 [601/745] Linking target lib/librte_pci.so.23.0 00:01:54.045 [602/745] Linking target lib/librte_timer.so.23.0 00:01:54.045 [603/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:54.045 [604/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:54.045 [605/745] Linking target lib/librte_cfgfile.so.23.0 00:01:54.045 [606/745] Linking target lib/librte_acl.so.23.0 00:01:54.045 [607/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:54.045 [608/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:54.307 [609/745] Linking target lib/librte_jobstats.so.23.0 00:01:54.307 [610/745] Compiling 
C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:54.307 [611/745] Linking target lib/librte_dmadev.so.23.0 00:01:54.307 [612/745] Linking target lib/librte_rawdev.so.23.0 00:01:54.307 [613/745] Linking target lib/librte_stack.so.23.0 00:01:54.307 [614/745] Linking target lib/librte_graph.so.23.0 00:01:54.307 [615/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:54.307 [616/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:54.307 [617/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:54.307 [618/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:54.307 [619/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:54.307 [620/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:54.307 [621/745] Linking target lib/librte_rib.so.23.0 00:01:54.307 [622/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:54.307 [623/745] Linking target lib/librte_mbuf.so.23.0 00:01:54.307 [624/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:54.307 [625/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:54.307 [626/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:54.565 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:54.565 [628/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:54.565 [629/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:54.565 [630/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:54.565 [631/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:54.565 [632/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:54.565 [633/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:54.565 [634/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:54.565 [635/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:54.565 [636/745] Linking target lib/librte_fib.so.23.0 00:01:54.565 [637/745] Linking target lib/librte_bbdev.so.23.0 00:01:54.565 [638/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:54.565 [639/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:54.565 [640/745] Linking target lib/librte_gpudev.so.23.0 00:01:54.565 [641/745] Linking target lib/librte_compressdev.so.23.0 00:01:54.565 [642/745] Linking target lib/librte_distributor.so.23.0 00:01:54.565 [643/745] Linking target lib/librte_reorder.so.23.0 00:01:54.565 [644/745] Linking target lib/librte_regexdev.so.23.0 00:01:54.565 [645/745] Linking target lib/librte_net.so.23.0 00:01:54.565 [646/745] Linking target lib/librte_sched.so.23.0 00:01:54.565 [647/745] Linking target lib/librte_cryptodev.so.23.0 00:01:54.565 [648/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:54.565 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:54.823 [650/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:54.823 [651/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:54.823 [652/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:54.823 [653/745] Linking target lib/librte_hash.so.23.0 00:01:54.823 [654/745] Linking target lib/librte_cmdline.so.23.0 
00:01:54.823 [655/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:54.823 [656/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:54.823 [657/745] Linking target lib/librte_security.so.23.0 00:01:54.823 [658/745] Linking target lib/librte_ethdev.so.23.0 00:01:54.823 [659/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:54.823 [660/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:54.823 [661/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:55.082 [662/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:55.082 [663/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:55.082 [664/745] Linking target lib/librte_efd.so.23.0 00:01:55.082 [665/745] Linking target lib/librte_lpm.so.23.0 00:01:55.082 [666/745] Linking target lib/librte_member.so.23.0 00:01:55.082 [667/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:55.082 [668/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:55.082 [669/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:55.082 [670/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:55.082 [671/745] Linking target lib/librte_gro.so.23.0 00:01:55.082 [672/745] Linking target lib/librte_ip_frag.so.23.0 00:01:55.082 [673/745] Linking target lib/librte_power.so.23.0 00:01:55.082 [674/745] Linking target lib/librte_ipsec.so.23.0 00:01:55.082 [675/745] Linking target lib/librte_pcapng.so.23.0 00:01:55.082 [676/745] Linking target lib/librte_metrics.so.23.0 00:01:55.082 [677/745] Linking target lib/librte_gso.so.23.0 00:01:55.082 [678/745] Linking target lib/librte_bpf.so.23.0 00:01:55.082 [679/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:55.082 [680/745] Linking target lib/librte_eventdev.so.23.0 00:01:55.340 [681/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:55.340 [682/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:55.340 [683/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:55.340 [684/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:55.340 [685/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:55.340 [686/745] Linking target lib/librte_latencystats.so.23.0 00:01:55.340 [687/745] Linking target lib/librte_bitratestats.so.23.0 00:01:55.340 [688/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:55.340 [689/745] Linking target lib/librte_pdump.so.23.0 00:01:55.340 [690/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:55.340 [691/745] Linking target lib/librte_port.so.23.0 00:01:55.340 [692/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:55.599 [693/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:55.599 [694/745] Linking target lib/librte_table.so.23.0 00:01:55.599 [695/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:55.857 [696/745] Generating symbol file 
lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:55.857 [697/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:55.857 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:56.115 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:56.373 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:56.632 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:56.632 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:56.632 [703/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:56.632 [704/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:56.632 [705/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:57.198 [706/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:57.198 [707/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.198 [708/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:57.198 [709/745] Linking static target drivers/librte_net_i40e.a 00:01:57.456 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:57.715 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.715 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:57.973 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:58.231 [714/745] Linking static target lib/librte_node.a 00:01:58.231 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.489 [716/745] Linking target lib/librte_node.so.23.0 00:01:58.489 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:59.055 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:59.620 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:07.732 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.498 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.498 [722/745] Linking static target lib/librte_vhost.a 00:02:46.498 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.498 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:58.707 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:58.707 [726/745] Linking static target lib/librte_pipeline.a 00:02:58.707 [727/745] Linking target app/dpdk-test-acl 00:02:58.707 [728/745] Linking target app/dpdk-proc-info 00:02:58.965 [729/745] Linking target app/dpdk-pdump 00:02:58.965 [730/745] Linking target app/dpdk-test-fib 00:02:58.965 [731/745] Linking target app/dpdk-test-regex 00:02:58.965 [732/745] Linking target app/dpdk-test-gpudev 00:02:58.965 [733/745] Linking target app/dpdk-test-cmdline 00:02:58.965 [734/745] Linking target app/dpdk-dumpcap 00:02:58.966 [735/745] Linking target app/dpdk-test-sad 00:02:58.966 [736/745] Linking target app/dpdk-test-flow-perf 00:02:58.966 [737/745] Linking target app/dpdk-test-pipeline 00:02:58.966 [738/745] Linking target app/dpdk-test-security-perf 00:02:58.966 [739/745] Linking target app/dpdk-test-compress-perf 00:02:58.966 [740/745] Linking target app/dpdk-test-eventdev 00:02:58.966 [741/745] 
Linking target app/dpdk-test-bbdev 00:02:58.966 [742/745] Linking target app/dpdk-test-crypto-perf 00:02:58.966 [743/745] Linking target app/dpdk-testpmd 00:03:00.876 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.876 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:00.876 00:48:49 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:03:00.876 00:48:49 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:00.876 00:48:49 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:00.876 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:00.876 [0/1] Installing files. 00:03:01.140 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:01.140 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.141 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:01.142 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.142 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:01.143 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.143 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:01.144 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.144 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 
00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:01.145 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:01.145 Installing lib/librte_kvargs.a 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:03:01.145 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:03:01.145 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.145 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.146 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing lib/librte_table.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:01.719 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:01.719 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:01.719 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.719 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:01.719 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.719 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.719 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.720 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.721 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.722 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:01.723 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:01.723 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:01.723 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:01.723 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:01.723 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:01.723 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:01.723 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:01.723 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:01.723 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:01.723 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:01.723 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:01.723 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:01.723 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:01.723 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:01.723 Installing symlink pointing to librte_mbuf.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:01.723 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:01.723 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:01.723 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:01.723 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:01.723 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:01.723 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:01.723 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:01.723 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:01.723 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:01.723 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:01.723 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:01.723 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:01.723 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:01.723 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:01.723 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:01.723 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:01.723 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:01.723 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:01.723 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:01.723 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:01.723 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:01.723 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:01.723 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:01.723 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 
00:03:01.723 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:01.723 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:01.723 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:01.723 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:01.723 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:01.723 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:01.723 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:01.723 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:01.723 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:01.723 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:01.723 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:01.723 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:01.723 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:01.723 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:01.723 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:01.723 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:01.724 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:01.724 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:01.724 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:01.724 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:01.724 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:01.724 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:01.724 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:01.724 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:01.724 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:01.724 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:01.724 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:01.724 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:01.724 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:01.724 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:01.724 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:01.724 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:01.724 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:01.724 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:01.724 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:01.724 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:01.724 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:01.724 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:01.724 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:01.724 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:01.724 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:01.724 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:01.724 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:01.724 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:01.724 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:01.724 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:01.724 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:01.724 Installing symlink pointing to librte_stack.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:01.724 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:01.724 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:01.724 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:01.724 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:01.724 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:01.724 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:01.724 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:01.724 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:01.724 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:01.724 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:01.724 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:01.724 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:01.724 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:01.724 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:01.724 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:01.724 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:01.724 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:01.724 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:01.724 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:01.724 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:01.724 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:01.724 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:01.724 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:01.724 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:01.724 './librte_bus_pci.so.23.0' -> 
'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:01.724 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:01.724 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:01.724 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:01.724 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:01.724 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:01.724 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:01.724 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:01.724 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:01.724 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:01.724 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:01.724 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:01.724 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:01.724 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:01.724 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:01.724 00:48:50 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:03:01.724 00:48:50 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:01.724 00:03:01.724 real 1m29.330s 00:03:01.724 user 14m33.722s 00:03:01.724 sys 1m48.759s 00:03:01.724 00:48:50 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:01.724 00:48:50 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:01.724 ************************************ 00:03:01.724 END TEST build_native_dpdk 00:03:01.724 ************************************ 00:03:01.724 00:48:51 -- common/autotest_common.sh@1142 -- $ return 0 00:03:01.724 00:48:51 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:01.724 00:48:51 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:01.724 00:48:51 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:01.724 00:48:51 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:01.724 00:48:51 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:01.724 00:48:51 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:01.724 00:48:51 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:01.724 00:48:51 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:01.724 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
00:03:01.985 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:01.985 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:01.985 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:02.245 Using 'verbs' RDMA provider 00:03:12.815 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:20.930 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:21.190 Creating mk/config.mk...done. 00:03:21.190 Creating mk/cc.flags.mk...done. 00:03:21.190 Type 'make' to build. 00:03:21.190 00:49:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:21.190 00:49:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:21.190 00:49:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:21.190 00:49:10 -- common/autotest_common.sh@10 -- $ set +x 00:03:21.190 ************************************ 00:03:21.190 START TEST make 00:03:21.190 ************************************ 00:03:21.190 00:49:10 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:21.448 make[1]: Nothing to be done for 'all'. 00:03:23.369 The Meson build system 00:03:23.369 Version: 1.3.1 00:03:23.369 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:23.369 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:23.369 Build type: native build 00:03:23.369 Project name: libvfio-user 00:03:23.369 Project version: 0.0.1 00:03:23.369 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:23.369 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:23.369 Host machine cpu family: x86_64 00:03:23.369 Host machine cpu: x86_64 00:03:23.369 Run-time dependency threads found: YES 00:03:23.369 Library dl found: YES 00:03:23.369 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:23.369 Run-time dependency json-c found: YES 0.17 00:03:23.369 Run-time dependency cmocka found: YES 1.1.7 00:03:23.369 Program pytest-3 found: NO 00:03:23.369 Program flake8 found: NO 00:03:23.369 Program misspell-fixer found: NO 00:03:23.369 Program restructuredtext-lint found: NO 00:03:23.369 Program valgrind found: YES (/usr/bin/valgrind) 00:03:23.369 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:23.369 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:23.369 Compiler for C supports arguments -Wwrite-strings: YES 00:03:23.369 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:23.369 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:23.369 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:23.369 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:23.369 Build targets in project: 8 00:03:23.369 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:23.369 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:23.369 00:03:23.369 libvfio-user 0.0.1 00:03:23.369 00:03:23.369 User defined options 00:03:23.369 buildtype : debug 00:03:23.369 default_library: shared 00:03:23.369 libdir : /usr/local/lib 00:03:23.369 00:03:23.369 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:23.942 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:23.942 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:23.942 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:24.204 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:24.204 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:24.204 [5/37] Compiling C object samples/null.p/null.c.o 00:03:24.204 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:24.204 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:24.204 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:24.204 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:24.204 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:24.204 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:24.204 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:24.204 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:24.204 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:24.204 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:24.204 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:24.204 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:24.204 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:24.204 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:24.204 [20/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:24.204 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:24.204 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:24.204 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:24.204 [24/37] Compiling C object samples/server.p/server.c.o 00:03:24.467 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:24.467 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:24.467 [27/37] Compiling C object samples/client.p/client.c.o 00:03:24.467 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:03:24.467 [29/37] Linking target samples/client 00:03:24.467 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:24.467 [31/37] Linking target test/unit_tests 00:03:24.731 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:24.731 [33/37] Linking target samples/null 00:03:24.731 [34/37] Linking target samples/lspci 00:03:24.731 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:24.731 [36/37] Linking target samples/gpio-pci-idio-16 00:03:24.731 [37/37] Linking target samples/server 00:03:24.731 INFO: autodetecting backend as ninja 00:03:24.731 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:03:24.731 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:25.676 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:25.676 ninja: no work to do. 00:03:37.874 CC lib/ut/ut.o 00:03:37.874 CC lib/log/log.o 00:03:37.874 CC lib/log/log_flags.o 00:03:37.874 CC lib/log/log_deprecated.o 00:03:37.874 CC lib/ut_mock/mock.o 00:03:37.874 LIB libspdk_ut_mock.a 00:03:37.874 LIB libspdk_ut.a 00:03:37.874 LIB libspdk_log.a 00:03:37.874 SO libspdk_ut_mock.so.6.0 00:03:37.874 SO libspdk_ut.so.2.0 00:03:37.874 SO libspdk_log.so.7.0 00:03:37.875 SYMLINK libspdk_ut_mock.so 00:03:37.875 SYMLINK libspdk_ut.so 00:03:37.875 SYMLINK libspdk_log.so 00:03:37.875 CXX lib/trace_parser/trace.o 00:03:37.875 CC lib/ioat/ioat.o 00:03:37.875 CC lib/dma/dma.o 00:03:37.875 CC lib/util/base64.o 00:03:37.875 CC lib/util/bit_array.o 00:03:37.875 CC lib/util/cpuset.o 00:03:37.875 CC lib/util/crc16.o 00:03:37.875 CC lib/util/crc32.o 00:03:37.875 CC lib/util/crc32c.o 00:03:37.875 CC lib/util/crc32_ieee.o 00:03:37.875 CC lib/util/crc64.o 00:03:37.875 CC lib/util/dif.o 00:03:37.875 CC lib/util/fd.o 00:03:37.875 CC lib/util/file.o 00:03:37.875 CC lib/util/hexlify.o 00:03:37.875 CC lib/util/iov.o 00:03:37.875 CC lib/util/math.o 00:03:37.875 CC lib/util/pipe.o 00:03:37.875 CC lib/util/strerror_tls.o 00:03:37.875 CC lib/util/string.o 00:03:37.875 CC lib/util/uuid.o 00:03:37.875 CC lib/util/fd_group.o 00:03:37.875 CC lib/util/xor.o 00:03:37.875 CC lib/util/zipf.o 00:03:37.875 CC lib/vfio_user/host/vfio_user_pci.o 00:03:37.875 CC lib/vfio_user/host/vfio_user.o 00:03:37.875 LIB libspdk_dma.a 00:03:37.875 SO libspdk_dma.so.4.0 00:03:37.875 SYMLINK libspdk_dma.so 00:03:37.875 LIB libspdk_ioat.a 00:03:37.875 SO libspdk_ioat.so.7.0 00:03:37.875 LIB libspdk_vfio_user.a 00:03:37.875 SYMLINK libspdk_ioat.so 00:03:37.875 SO libspdk_vfio_user.so.5.0 00:03:37.875 SYMLINK libspdk_vfio_user.so 00:03:37.875 LIB libspdk_util.a 00:03:37.875 SO libspdk_util.so.9.1 00:03:38.134 SYMLINK libspdk_util.so 00:03:38.134 CC lib/conf/conf.o 00:03:38.134 CC lib/rdma_utils/rdma_utils.o 00:03:38.134 CC lib/rdma_provider/common.o 00:03:38.134 CC lib/vmd/vmd.o 00:03:38.134 CC lib/json/json_parse.o 00:03:38.134 CC lib/vmd/led.o 00:03:38.134 CC lib/json/json_util.o 00:03:38.134 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:38.134 CC lib/json/json_write.o 00:03:38.134 CC lib/idxd/idxd.o 00:03:38.134 CC lib/env_dpdk/env.o 00:03:38.134 CC lib/env_dpdk/memory.o 00:03:38.134 CC lib/idxd/idxd_user.o 00:03:38.134 CC lib/env_dpdk/pci.o 00:03:38.134 CC lib/idxd/idxd_kernel.o 00:03:38.134 CC lib/env_dpdk/init.o 00:03:38.134 CC lib/env_dpdk/threads.o 00:03:38.134 CC lib/env_dpdk/pci_ioat.o 00:03:38.134 CC lib/env_dpdk/pci_virtio.o 00:03:38.134 CC lib/env_dpdk/pci_vmd.o 00:03:38.134 CC lib/env_dpdk/pci_idxd.o 00:03:38.134 CC lib/env_dpdk/pci_event.o 00:03:38.134 CC lib/env_dpdk/sigbus_handler.o 00:03:38.134 CC lib/env_dpdk/pci_dpdk.o 00:03:38.134 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:38.134 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:38.393 LIB libspdk_trace_parser.a 00:03:38.393 SO libspdk_trace_parser.so.5.0 00:03:38.393 LIB libspdk_rdma_provider.a 00:03:38.393 SO libspdk_rdma_provider.so.6.0 00:03:38.393 LIB libspdk_conf.a 00:03:38.651 SO libspdk_conf.so.6.0 00:03:38.651 LIB libspdk_rdma_utils.a 00:03:38.651 SYMLINK libspdk_rdma_provider.so 00:03:38.651 SYMLINK 
libspdk_trace_parser.so 00:03:38.651 LIB libspdk_json.a 00:03:38.651 SO libspdk_rdma_utils.so.1.0 00:03:38.651 SYMLINK libspdk_conf.so 00:03:38.651 SO libspdk_json.so.6.0 00:03:38.651 SYMLINK libspdk_rdma_utils.so 00:03:38.651 SYMLINK libspdk_json.so 00:03:38.910 CC lib/jsonrpc/jsonrpc_server.o 00:03:38.910 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:38.910 CC lib/jsonrpc/jsonrpc_client.o 00:03:38.910 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:38.910 LIB libspdk_idxd.a 00:03:38.910 SO libspdk_idxd.so.12.0 00:03:38.910 SYMLINK libspdk_idxd.so 00:03:38.910 LIB libspdk_vmd.a 00:03:38.910 SO libspdk_vmd.so.6.0 00:03:38.910 SYMLINK libspdk_vmd.so 00:03:39.168 LIB libspdk_jsonrpc.a 00:03:39.168 SO libspdk_jsonrpc.so.6.0 00:03:39.168 SYMLINK libspdk_jsonrpc.so 00:03:39.426 CC lib/rpc/rpc.o 00:03:39.685 LIB libspdk_rpc.a 00:03:39.685 SO libspdk_rpc.so.6.0 00:03:39.685 SYMLINK libspdk_rpc.so 00:03:39.685 CC lib/keyring/keyring.o 00:03:39.685 CC lib/keyring/keyring_rpc.o 00:03:39.685 CC lib/notify/notify.o 00:03:39.685 CC lib/notify/notify_rpc.o 00:03:39.685 CC lib/trace/trace.o 00:03:39.685 CC lib/trace/trace_flags.o 00:03:39.685 CC lib/trace/trace_rpc.o 00:03:39.943 LIB libspdk_notify.a 00:03:39.943 SO libspdk_notify.so.6.0 00:03:39.943 LIB libspdk_keyring.a 00:03:39.943 SYMLINK libspdk_notify.so 00:03:40.201 LIB libspdk_trace.a 00:03:40.201 SO libspdk_keyring.so.1.0 00:03:40.201 SO libspdk_trace.so.10.0 00:03:40.201 SYMLINK libspdk_keyring.so 00:03:40.201 SYMLINK libspdk_trace.so 00:03:40.201 CC lib/sock/sock.o 00:03:40.201 CC lib/sock/sock_rpc.o 00:03:40.459 CC lib/thread/thread.o 00:03:40.459 CC lib/thread/iobuf.o 00:03:40.459 LIB libspdk_env_dpdk.a 00:03:40.459 SO libspdk_env_dpdk.so.14.1 00:03:40.717 SYMLINK libspdk_env_dpdk.so 00:03:40.717 LIB libspdk_sock.a 00:03:40.717 SO libspdk_sock.so.10.0 00:03:40.717 SYMLINK libspdk_sock.so 00:03:40.975 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:40.975 CC lib/nvme/nvme_ctrlr.o 00:03:40.975 CC lib/nvme/nvme_fabric.o 00:03:40.975 CC lib/nvme/nvme_ns_cmd.o 00:03:40.975 CC lib/nvme/nvme_ns.o 00:03:40.975 CC lib/nvme/nvme_pcie_common.o 00:03:40.975 CC lib/nvme/nvme_pcie.o 00:03:40.975 CC lib/nvme/nvme_qpair.o 00:03:40.975 CC lib/nvme/nvme.o 00:03:40.975 CC lib/nvme/nvme_quirks.o 00:03:40.975 CC lib/nvme/nvme_transport.o 00:03:40.975 CC lib/nvme/nvme_discovery.o 00:03:40.975 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:40.975 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:40.975 CC lib/nvme/nvme_tcp.o 00:03:40.975 CC lib/nvme/nvme_opal.o 00:03:40.975 CC lib/nvme/nvme_io_msg.o 00:03:40.975 CC lib/nvme/nvme_poll_group.o 00:03:40.975 CC lib/nvme/nvme_zns.o 00:03:40.975 CC lib/nvme/nvme_stubs.o 00:03:40.975 CC lib/nvme/nvme_auth.o 00:03:40.975 CC lib/nvme/nvme_cuse.o 00:03:40.975 CC lib/nvme/nvme_vfio_user.o 00:03:40.975 CC lib/nvme/nvme_rdma.o 00:03:41.952 LIB libspdk_thread.a 00:03:41.952 SO libspdk_thread.so.10.1 00:03:42.210 SYMLINK libspdk_thread.so 00:03:42.210 CC lib/virtio/virtio.o 00:03:42.210 CC lib/accel/accel.o 00:03:42.210 CC lib/vfu_tgt/tgt_endpoint.o 00:03:42.210 CC lib/init/json_config.o 00:03:42.210 CC lib/accel/accel_rpc.o 00:03:42.210 CC lib/virtio/virtio_vhost_user.o 00:03:42.210 CC lib/blob/blobstore.o 00:03:42.210 CC lib/vfu_tgt/tgt_rpc.o 00:03:42.210 CC lib/blob/request.o 00:03:42.210 CC lib/init/subsystem.o 00:03:42.210 CC lib/virtio/virtio_vfio_user.o 00:03:42.210 CC lib/accel/accel_sw.o 00:03:42.210 CC lib/init/subsystem_rpc.o 00:03:42.210 CC lib/blob/zeroes.o 00:03:42.210 CC lib/virtio/virtio_pci.o 00:03:42.210 CC lib/blob/blob_bs_dev.o 00:03:42.210 CC 
lib/init/rpc.o 00:03:42.467 LIB libspdk_init.a 00:03:42.467 SO libspdk_init.so.5.0 00:03:42.725 LIB libspdk_vfu_tgt.a 00:03:42.726 LIB libspdk_virtio.a 00:03:42.726 SYMLINK libspdk_init.so 00:03:42.726 SO libspdk_vfu_tgt.so.3.0 00:03:42.726 SO libspdk_virtio.so.7.0 00:03:42.726 SYMLINK libspdk_vfu_tgt.so 00:03:42.726 SYMLINK libspdk_virtio.so 00:03:42.726 CC lib/event/app.o 00:03:42.726 CC lib/event/reactor.o 00:03:42.726 CC lib/event/log_rpc.o 00:03:42.726 CC lib/event/app_rpc.o 00:03:42.726 CC lib/event/scheduler_static.o 00:03:43.292 LIB libspdk_event.a 00:03:43.292 SO libspdk_event.so.14.0 00:03:43.292 LIB libspdk_accel.a 00:03:43.292 SYMLINK libspdk_event.so 00:03:43.292 SO libspdk_accel.so.15.1 00:03:43.292 LIB libspdk_nvme.a 00:03:43.292 SYMLINK libspdk_accel.so 00:03:43.550 SO libspdk_nvme.so.13.1 00:03:43.550 CC lib/bdev/bdev.o 00:03:43.550 CC lib/bdev/bdev_rpc.o 00:03:43.550 CC lib/bdev/bdev_zone.o 00:03:43.550 CC lib/bdev/part.o 00:03:43.550 CC lib/bdev/scsi_nvme.o 00:03:43.809 SYMLINK libspdk_nvme.so 00:03:45.185 LIB libspdk_blob.a 00:03:45.443 SO libspdk_blob.so.11.0 00:03:45.443 SYMLINK libspdk_blob.so 00:03:45.703 CC lib/lvol/lvol.o 00:03:45.703 CC lib/blobfs/blobfs.o 00:03:45.703 CC lib/blobfs/tree.o 00:03:46.270 LIB libspdk_bdev.a 00:03:46.270 SO libspdk_bdev.so.15.1 00:03:46.270 SYMLINK libspdk_bdev.so 00:03:46.536 LIB libspdk_blobfs.a 00:03:46.536 SO libspdk_blobfs.so.10.0 00:03:46.536 CC lib/nvmf/ctrlr.o 00:03:46.536 CC lib/ublk/ublk.o 00:03:46.536 CC lib/nbd/nbd.o 00:03:46.536 CC lib/scsi/dev.o 00:03:46.536 CC lib/nvmf/ctrlr_discovery.o 00:03:46.536 CC lib/scsi/lun.o 00:03:46.536 CC lib/nbd/nbd_rpc.o 00:03:46.536 CC lib/ublk/ublk_rpc.o 00:03:46.536 CC lib/ftl/ftl_core.o 00:03:46.536 CC lib/nvmf/ctrlr_bdev.o 00:03:46.536 CC lib/scsi/port.o 00:03:46.536 CC lib/nvmf/subsystem.o 00:03:46.536 CC lib/scsi/scsi.o 00:03:46.536 CC lib/ftl/ftl_init.o 00:03:46.536 CC lib/scsi/scsi_bdev.o 00:03:46.536 CC lib/ftl/ftl_layout.o 00:03:46.536 CC lib/nvmf/nvmf_rpc.o 00:03:46.536 CC lib/nvmf/nvmf.o 00:03:46.536 CC lib/ftl/ftl_io.o 00:03:46.536 CC lib/ftl/ftl_debug.o 00:03:46.536 CC lib/nvmf/transport.o 00:03:46.536 CC lib/scsi/scsi_rpc.o 00:03:46.536 CC lib/scsi/scsi_pr.o 00:03:46.536 CC lib/ftl/ftl_sb.o 00:03:46.536 CC lib/nvmf/tcp.o 00:03:46.536 CC lib/scsi/task.o 00:03:46.536 CC lib/ftl/ftl_l2p.o 00:03:46.536 CC lib/nvmf/stubs.o 00:03:46.536 CC lib/ftl/ftl_l2p_flat.o 00:03:46.536 CC lib/nvmf/mdns_server.o 00:03:46.536 CC lib/nvmf/vfio_user.o 00:03:46.536 CC lib/ftl/ftl_nv_cache.o 00:03:46.536 CC lib/nvmf/rdma.o 00:03:46.536 CC lib/ftl/ftl_band.o 00:03:46.536 CC lib/ftl/ftl_band_ops.o 00:03:46.536 CC lib/nvmf/auth.o 00:03:46.536 CC lib/ftl/ftl_writer.o 00:03:46.536 CC lib/ftl/ftl_rq.o 00:03:46.536 CC lib/ftl/ftl_reloc.o 00:03:46.536 CC lib/ftl/ftl_l2p_cache.o 00:03:46.536 CC lib/ftl/ftl_p2l.o 00:03:46.536 CC lib/ftl/mngt/ftl_mngt.o 00:03:46.536 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:46.536 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:46.536 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:46.536 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:46.536 SYMLINK libspdk_blobfs.so 00:03:46.536 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:46.536 LIB libspdk_lvol.a 00:03:46.536 SO libspdk_lvol.so.10.0 00:03:46.799 SYMLINK libspdk_lvol.so 00:03:46.799 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:46.799 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:46.799 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:46.799 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:46.799 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:46.799 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:46.799 
CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:46.799 CC lib/ftl/utils/ftl_conf.o 00:03:46.799 CC lib/ftl/utils/ftl_md.o 00:03:46.799 CC lib/ftl/utils/ftl_mempool.o 00:03:46.799 CC lib/ftl/utils/ftl_bitmap.o 00:03:46.799 CC lib/ftl/utils/ftl_property.o 00:03:46.799 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:47.061 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:47.061 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:47.061 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:47.061 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:47.061 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:47.061 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:47.061 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:47.061 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:47.061 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:47.061 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:47.061 CC lib/ftl/base/ftl_base_dev.o 00:03:47.061 CC lib/ftl/base/ftl_base_bdev.o 00:03:47.061 CC lib/ftl/ftl_trace.o 00:03:47.320 LIB libspdk_nbd.a 00:03:47.320 SO libspdk_nbd.so.7.0 00:03:47.320 SYMLINK libspdk_nbd.so 00:03:47.320 LIB libspdk_scsi.a 00:03:47.578 SO libspdk_scsi.so.9.0 00:03:47.578 LIB libspdk_ublk.a 00:03:47.578 SO libspdk_ublk.so.3.0 00:03:47.578 SYMLINK libspdk_scsi.so 00:03:47.578 SYMLINK libspdk_ublk.so 00:03:47.836 CC lib/vhost/vhost.o 00:03:47.836 CC lib/iscsi/conn.o 00:03:47.836 CC lib/vhost/vhost_rpc.o 00:03:47.836 CC lib/iscsi/init_grp.o 00:03:47.836 CC lib/vhost/vhost_scsi.o 00:03:47.836 CC lib/iscsi/iscsi.o 00:03:47.836 CC lib/vhost/vhost_blk.o 00:03:47.836 CC lib/iscsi/md5.o 00:03:47.836 CC lib/vhost/rte_vhost_user.o 00:03:47.836 CC lib/iscsi/param.o 00:03:47.836 CC lib/iscsi/portal_grp.o 00:03:47.836 CC lib/iscsi/tgt_node.o 00:03:47.836 CC lib/iscsi/iscsi_subsystem.o 00:03:47.836 CC lib/iscsi/iscsi_rpc.o 00:03:47.836 CC lib/iscsi/task.o 00:03:48.094 LIB libspdk_ftl.a 00:03:48.094 SO libspdk_ftl.so.9.0 00:03:48.660 SYMLINK libspdk_ftl.so 00:03:48.918 LIB libspdk_vhost.a 00:03:48.918 SO libspdk_vhost.so.8.0 00:03:48.918 LIB libspdk_nvmf.a 00:03:49.176 SO libspdk_nvmf.so.18.1 00:03:49.176 SYMLINK libspdk_vhost.so 00:03:49.176 LIB libspdk_iscsi.a 00:03:49.176 SO libspdk_iscsi.so.8.0 00:03:49.176 SYMLINK libspdk_nvmf.so 00:03:49.435 SYMLINK libspdk_iscsi.so 00:03:49.694 CC module/vfu_device/vfu_virtio.o 00:03:49.694 CC module/vfu_device/vfu_virtio_blk.o 00:03:49.694 CC module/vfu_device/vfu_virtio_scsi.o 00:03:49.694 CC module/vfu_device/vfu_virtio_rpc.o 00:03:49.694 CC module/env_dpdk/env_dpdk_rpc.o 00:03:49.694 CC module/keyring/file/keyring.o 00:03:49.694 CC module/keyring/file/keyring_rpc.o 00:03:49.694 CC module/keyring/linux/keyring.o 00:03:49.694 CC module/keyring/linux/keyring_rpc.o 00:03:49.694 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:49.694 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:49.694 CC module/accel/iaa/accel_iaa.o 00:03:49.694 CC module/accel/iaa/accel_iaa_rpc.o 00:03:49.694 CC module/accel/error/accel_error.o 00:03:49.694 CC module/accel/dsa/accel_dsa.o 00:03:49.694 CC module/scheduler/gscheduler/gscheduler.o 00:03:49.694 CC module/accel/dsa/accel_dsa_rpc.o 00:03:49.694 CC module/accel/error/accel_error_rpc.o 00:03:49.694 CC module/blob/bdev/blob_bdev.o 00:03:49.694 CC module/sock/posix/posix.o 00:03:49.694 CC module/accel/ioat/accel_ioat.o 00:03:49.694 CC module/accel/ioat/accel_ioat_rpc.o 00:03:49.694 LIB libspdk_env_dpdk_rpc.a 00:03:49.953 SO libspdk_env_dpdk_rpc.so.6.0 00:03:49.953 SYMLINK libspdk_env_dpdk_rpc.so 00:03:49.953 LIB libspdk_keyring_linux.a 00:03:49.953 LIB libspdk_keyring_file.a 00:03:49.953 LIB 
libspdk_scheduler_gscheduler.a 00:03:49.953 LIB libspdk_scheduler_dpdk_governor.a 00:03:49.953 SO libspdk_keyring_file.so.1.0 00:03:49.953 SO libspdk_keyring_linux.so.1.0 00:03:49.953 SO libspdk_scheduler_gscheduler.so.4.0 00:03:49.953 LIB libspdk_accel_error.a 00:03:49.953 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:49.953 LIB libspdk_accel_ioat.a 00:03:49.953 LIB libspdk_scheduler_dynamic.a 00:03:49.953 SO libspdk_accel_error.so.2.0 00:03:49.953 LIB libspdk_accel_iaa.a 00:03:49.953 SO libspdk_accel_ioat.so.6.0 00:03:49.953 SYMLINK libspdk_keyring_file.so 00:03:49.953 SYMLINK libspdk_keyring_linux.so 00:03:49.953 SO libspdk_scheduler_dynamic.so.4.0 00:03:49.953 SYMLINK libspdk_scheduler_gscheduler.so 00:03:49.953 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:49.953 SO libspdk_accel_iaa.so.3.0 00:03:49.953 LIB libspdk_accel_dsa.a 00:03:49.953 SYMLINK libspdk_accel_error.so 00:03:49.953 LIB libspdk_blob_bdev.a 00:03:49.953 SYMLINK libspdk_scheduler_dynamic.so 00:03:49.953 SYMLINK libspdk_accel_ioat.so 00:03:49.953 SO libspdk_accel_dsa.so.5.0 00:03:49.953 SYMLINK libspdk_accel_iaa.so 00:03:49.953 SO libspdk_blob_bdev.so.11.0 00:03:50.212 SYMLINK libspdk_accel_dsa.so 00:03:50.212 SYMLINK libspdk_blob_bdev.so 00:03:50.212 LIB libspdk_vfu_device.a 00:03:50.212 SO libspdk_vfu_device.so.3.0 00:03:50.472 CC module/bdev/lvol/vbdev_lvol.o 00:03:50.472 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:50.472 CC module/bdev/null/bdev_null.o 00:03:50.472 CC module/bdev/error/vbdev_error.o 00:03:50.472 CC module/bdev/nvme/bdev_nvme.o 00:03:50.472 CC module/bdev/null/bdev_null_rpc.o 00:03:50.472 CC module/bdev/error/vbdev_error_rpc.o 00:03:50.472 CC module/bdev/split/vbdev_split.o 00:03:50.472 CC module/bdev/malloc/bdev_malloc.o 00:03:50.472 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:50.472 CC module/bdev/gpt/gpt.o 00:03:50.472 CC module/bdev/delay/vbdev_delay.o 00:03:50.472 CC module/blobfs/bdev/blobfs_bdev.o 00:03:50.472 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:50.473 CC module/bdev/nvme/nvme_rpc.o 00:03:50.473 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:50.473 CC module/bdev/split/vbdev_split_rpc.o 00:03:50.473 CC module/bdev/aio/bdev_aio.o 00:03:50.473 CC module/bdev/nvme/bdev_mdns_client.o 00:03:50.473 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:50.473 CC module/bdev/gpt/vbdev_gpt.o 00:03:50.473 CC module/bdev/raid/bdev_raid.o 00:03:50.473 CC module/bdev/passthru/vbdev_passthru.o 00:03:50.473 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:50.473 CC module/bdev/nvme/vbdev_opal.o 00:03:50.473 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:50.473 CC module/bdev/raid/bdev_raid_rpc.o 00:03:50.473 CC module/bdev/aio/bdev_aio_rpc.o 00:03:50.473 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:50.473 CC module/bdev/raid/bdev_raid_sb.o 00:03:50.473 CC module/bdev/raid/raid0.o 00:03:50.473 CC module/bdev/raid/raid1.o 00:03:50.473 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:50.473 CC module/bdev/raid/concat.o 00:03:50.473 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:50.473 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:50.473 CC module/bdev/iscsi/bdev_iscsi.o 00:03:50.473 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:50.473 CC module/bdev/ftl/bdev_ftl.o 00:03:50.473 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:50.473 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:50.473 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:50.473 SYMLINK libspdk_vfu_device.so 00:03:50.731 LIB libspdk_sock_posix.a 00:03:50.731 SO libspdk_sock_posix.so.6.0 00:03:50.731 LIB libspdk_blobfs_bdev.a 00:03:50.731 LIB 
libspdk_bdev_gpt.a 00:03:50.731 SO libspdk_blobfs_bdev.so.6.0 00:03:50.731 SO libspdk_bdev_gpt.so.6.0 00:03:50.731 SYMLINK libspdk_sock_posix.so 00:03:50.731 SYMLINK libspdk_blobfs_bdev.so 00:03:50.731 LIB libspdk_bdev_split.a 00:03:50.731 LIB libspdk_bdev_error.a 00:03:50.731 SYMLINK libspdk_bdev_gpt.so 00:03:50.989 SO libspdk_bdev_error.so.6.0 00:03:50.989 SO libspdk_bdev_split.so.6.0 00:03:50.989 LIB libspdk_bdev_null.a 00:03:50.989 SO libspdk_bdev_null.so.6.0 00:03:50.989 LIB libspdk_bdev_ftl.a 00:03:50.989 LIB libspdk_bdev_passthru.a 00:03:50.989 SYMLINK libspdk_bdev_error.so 00:03:50.989 LIB libspdk_bdev_malloc.a 00:03:50.989 SO libspdk_bdev_ftl.so.6.0 00:03:50.989 SO libspdk_bdev_passthru.so.6.0 00:03:50.989 LIB libspdk_bdev_aio.a 00:03:50.989 SYMLINK libspdk_bdev_split.so 00:03:50.989 SYMLINK libspdk_bdev_null.so 00:03:50.989 SO libspdk_bdev_malloc.so.6.0 00:03:50.989 LIB libspdk_bdev_zone_block.a 00:03:50.989 SO libspdk_bdev_aio.so.6.0 00:03:50.989 SO libspdk_bdev_zone_block.so.6.0 00:03:50.989 LIB libspdk_bdev_delay.a 00:03:50.989 SYMLINK libspdk_bdev_ftl.so 00:03:50.989 SYMLINK libspdk_bdev_passthru.so 00:03:50.989 SYMLINK libspdk_bdev_malloc.so 00:03:50.989 LIB libspdk_bdev_iscsi.a 00:03:50.989 SO libspdk_bdev_delay.so.6.0 00:03:50.989 SYMLINK libspdk_bdev_aio.so 00:03:50.989 SO libspdk_bdev_iscsi.so.6.0 00:03:50.989 SYMLINK libspdk_bdev_zone_block.so 00:03:50.989 SYMLINK libspdk_bdev_delay.so 00:03:50.989 SYMLINK libspdk_bdev_iscsi.so 00:03:51.248 LIB libspdk_bdev_virtio.a 00:03:51.248 LIB libspdk_bdev_lvol.a 00:03:51.248 SO libspdk_bdev_virtio.so.6.0 00:03:51.248 SO libspdk_bdev_lvol.so.6.0 00:03:51.248 SYMLINK libspdk_bdev_virtio.so 00:03:51.248 SYMLINK libspdk_bdev_lvol.so 00:03:51.507 LIB libspdk_bdev_raid.a 00:03:51.507 SO libspdk_bdev_raid.so.6.0 00:03:51.507 SYMLINK libspdk_bdev_raid.so 00:03:52.889 LIB libspdk_bdev_nvme.a 00:03:52.889 SO libspdk_bdev_nvme.so.7.0 00:03:52.889 SYMLINK libspdk_bdev_nvme.so 00:03:53.148 CC module/event/subsystems/vmd/vmd.o 00:03:53.148 CC module/event/subsystems/sock/sock.o 00:03:53.148 CC module/event/subsystems/iobuf/iobuf.o 00:03:53.148 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:53.148 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:53.148 CC module/event/subsystems/scheduler/scheduler.o 00:03:53.148 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:53.148 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:53.148 CC module/event/subsystems/keyring/keyring.o 00:03:53.406 LIB libspdk_event_keyring.a 00:03:53.406 LIB libspdk_event_vhost_blk.a 00:03:53.406 LIB libspdk_event_scheduler.a 00:03:53.406 LIB libspdk_event_vfu_tgt.a 00:03:53.406 LIB libspdk_event_vmd.a 00:03:53.406 LIB libspdk_event_sock.a 00:03:53.406 LIB libspdk_event_iobuf.a 00:03:53.406 SO libspdk_event_keyring.so.1.0 00:03:53.406 SO libspdk_event_vhost_blk.so.3.0 00:03:53.406 SO libspdk_event_vfu_tgt.so.3.0 00:03:53.406 SO libspdk_event_sock.so.5.0 00:03:53.406 SO libspdk_event_scheduler.so.4.0 00:03:53.406 SO libspdk_event_vmd.so.6.0 00:03:53.406 SO libspdk_event_iobuf.so.3.0 00:03:53.406 SYMLINK libspdk_event_keyring.so 00:03:53.406 SYMLINK libspdk_event_vhost_blk.so 00:03:53.406 SYMLINK libspdk_event_sock.so 00:03:53.406 SYMLINK libspdk_event_vfu_tgt.so 00:03:53.406 SYMLINK libspdk_event_scheduler.so 00:03:53.406 SYMLINK libspdk_event_vmd.so 00:03:53.406 SYMLINK libspdk_event_iobuf.so 00:03:53.664 CC module/event/subsystems/accel/accel.o 00:03:53.922 LIB libspdk_event_accel.a 00:03:53.922 SO libspdk_event_accel.so.6.0 00:03:53.922 SYMLINK libspdk_event_accel.so 
00:03:54.179 CC module/event/subsystems/bdev/bdev.o 00:03:54.179 LIB libspdk_event_bdev.a 00:03:54.179 SO libspdk_event_bdev.so.6.0 00:03:54.437 SYMLINK libspdk_event_bdev.so 00:03:54.437 CC module/event/subsystems/ublk/ublk.o 00:03:54.437 CC module/event/subsystems/scsi/scsi.o 00:03:54.437 CC module/event/subsystems/nbd/nbd.o 00:03:54.437 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:54.437 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:54.695 LIB libspdk_event_ublk.a 00:03:54.695 LIB libspdk_event_nbd.a 00:03:54.695 LIB libspdk_event_scsi.a 00:03:54.695 SO libspdk_event_ublk.so.3.0 00:03:54.695 SO libspdk_event_nbd.so.6.0 00:03:54.695 SO libspdk_event_scsi.so.6.0 00:03:54.695 SYMLINK libspdk_event_ublk.so 00:03:54.695 SYMLINK libspdk_event_nbd.so 00:03:54.695 SYMLINK libspdk_event_scsi.so 00:03:54.695 LIB libspdk_event_nvmf.a 00:03:54.695 SO libspdk_event_nvmf.so.6.0 00:03:54.953 SYMLINK libspdk_event_nvmf.so 00:03:54.953 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:54.953 CC module/event/subsystems/iscsi/iscsi.o 00:03:54.953 LIB libspdk_event_vhost_scsi.a 00:03:54.953 LIB libspdk_event_iscsi.a 00:03:54.953 SO libspdk_event_vhost_scsi.so.3.0 00:03:55.212 SO libspdk_event_iscsi.so.6.0 00:03:55.212 SYMLINK libspdk_event_vhost_scsi.so 00:03:55.212 SYMLINK libspdk_event_iscsi.so 00:03:55.212 SO libspdk.so.6.0 00:03:55.212 SYMLINK libspdk.so 00:03:55.475 CC app/trace_record/trace_record.o 00:03:55.475 TEST_HEADER include/spdk/accel.h 00:03:55.475 TEST_HEADER include/spdk/accel_module.h 00:03:55.475 CXX app/trace/trace.o 00:03:55.475 CC test/rpc_client/rpc_client_test.o 00:03:55.475 TEST_HEADER include/spdk/assert.h 00:03:55.475 TEST_HEADER include/spdk/barrier.h 00:03:55.475 TEST_HEADER include/spdk/base64.h 00:03:55.475 TEST_HEADER include/spdk/bdev.h 00:03:55.475 TEST_HEADER include/spdk/bdev_module.h 00:03:55.475 CC app/spdk_top/spdk_top.o 00:03:55.475 TEST_HEADER include/spdk/bdev_zone.h 00:03:55.475 TEST_HEADER include/spdk/bit_array.h 00:03:55.475 TEST_HEADER include/spdk/bit_pool.h 00:03:55.475 TEST_HEADER include/spdk/blob_bdev.h 00:03:55.475 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:55.475 TEST_HEADER include/spdk/blobfs.h 00:03:55.475 CC app/spdk_nvme_perf/perf.o 00:03:55.475 TEST_HEADER include/spdk/blob.h 00:03:55.475 TEST_HEADER include/spdk/conf.h 00:03:55.476 CC app/spdk_nvme_discover/discovery_aer.o 00:03:55.476 CC app/spdk_nvme_identify/identify.o 00:03:55.476 TEST_HEADER include/spdk/config.h 00:03:55.476 TEST_HEADER include/spdk/cpuset.h 00:03:55.476 CC app/spdk_lspci/spdk_lspci.o 00:03:55.476 TEST_HEADER include/spdk/crc16.h 00:03:55.476 TEST_HEADER include/spdk/crc32.h 00:03:55.476 TEST_HEADER include/spdk/crc64.h 00:03:55.476 TEST_HEADER include/spdk/dif.h 00:03:55.476 TEST_HEADER include/spdk/dma.h 00:03:55.476 TEST_HEADER include/spdk/endian.h 00:03:55.476 TEST_HEADER include/spdk/env_dpdk.h 00:03:55.476 TEST_HEADER include/spdk/env.h 00:03:55.476 TEST_HEADER include/spdk/event.h 00:03:55.476 TEST_HEADER include/spdk/fd_group.h 00:03:55.476 TEST_HEADER include/spdk/fd.h 00:03:55.476 TEST_HEADER include/spdk/file.h 00:03:55.476 TEST_HEADER include/spdk/ftl.h 00:03:55.476 TEST_HEADER include/spdk/gpt_spec.h 00:03:55.476 TEST_HEADER include/spdk/hexlify.h 00:03:55.476 TEST_HEADER include/spdk/histogram_data.h 00:03:55.476 TEST_HEADER include/spdk/idxd.h 00:03:55.476 TEST_HEADER include/spdk/idxd_spec.h 00:03:55.476 TEST_HEADER include/spdk/init.h 00:03:55.476 TEST_HEADER include/spdk/ioat.h 00:03:55.476 TEST_HEADER include/spdk/ioat_spec.h 00:03:55.476 
TEST_HEADER include/spdk/iscsi_spec.h 00:03:55.476 TEST_HEADER include/spdk/json.h 00:03:55.476 TEST_HEADER include/spdk/jsonrpc.h 00:03:55.476 TEST_HEADER include/spdk/keyring.h 00:03:55.476 TEST_HEADER include/spdk/keyring_module.h 00:03:55.476 TEST_HEADER include/spdk/likely.h 00:03:55.476 TEST_HEADER include/spdk/log.h 00:03:55.476 TEST_HEADER include/spdk/lvol.h 00:03:55.476 TEST_HEADER include/spdk/memory.h 00:03:55.476 TEST_HEADER include/spdk/mmio.h 00:03:55.476 TEST_HEADER include/spdk/nbd.h 00:03:55.476 TEST_HEADER include/spdk/notify.h 00:03:55.476 TEST_HEADER include/spdk/nvme.h 00:03:55.476 TEST_HEADER include/spdk/nvme_intel.h 00:03:55.476 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:55.476 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:55.476 TEST_HEADER include/spdk/nvme_zns.h 00:03:55.476 TEST_HEADER include/spdk/nvme_spec.h 00:03:55.476 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:55.476 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:55.476 TEST_HEADER include/spdk/nvmf.h 00:03:55.476 TEST_HEADER include/spdk/nvmf_spec.h 00:03:55.476 TEST_HEADER include/spdk/nvmf_transport.h 00:03:55.476 TEST_HEADER include/spdk/opal.h 00:03:55.476 TEST_HEADER include/spdk/opal_spec.h 00:03:55.476 TEST_HEADER include/spdk/pci_ids.h 00:03:55.476 TEST_HEADER include/spdk/pipe.h 00:03:55.476 TEST_HEADER include/spdk/queue.h 00:03:55.476 TEST_HEADER include/spdk/reduce.h 00:03:55.476 TEST_HEADER include/spdk/rpc.h 00:03:55.476 TEST_HEADER include/spdk/scheduler.h 00:03:55.476 TEST_HEADER include/spdk/scsi.h 00:03:55.476 TEST_HEADER include/spdk/sock.h 00:03:55.476 TEST_HEADER include/spdk/scsi_spec.h 00:03:55.476 TEST_HEADER include/spdk/stdinc.h 00:03:55.476 TEST_HEADER include/spdk/string.h 00:03:55.476 TEST_HEADER include/spdk/thread.h 00:03:55.476 TEST_HEADER include/spdk/trace.h 00:03:55.476 TEST_HEADER include/spdk/trace_parser.h 00:03:55.476 TEST_HEADER include/spdk/tree.h 00:03:55.476 TEST_HEADER include/spdk/ublk.h 00:03:55.476 TEST_HEADER include/spdk/util.h 00:03:55.476 TEST_HEADER include/spdk/uuid.h 00:03:55.476 TEST_HEADER include/spdk/version.h 00:03:55.476 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:55.476 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:55.476 TEST_HEADER include/spdk/vhost.h 00:03:55.476 TEST_HEADER include/spdk/vmd.h 00:03:55.476 TEST_HEADER include/spdk/xor.h 00:03:55.476 TEST_HEADER include/spdk/zipf.h 00:03:55.476 CXX test/cpp_headers/accel.o 00:03:55.476 CXX test/cpp_headers/accel_module.o 00:03:55.476 CXX test/cpp_headers/assert.o 00:03:55.476 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:55.476 CXX test/cpp_headers/barrier.o 00:03:55.476 CXX test/cpp_headers/base64.o 00:03:55.476 CXX test/cpp_headers/bdev.o 00:03:55.476 CXX test/cpp_headers/bdev_module.o 00:03:55.476 CXX test/cpp_headers/bdev_zone.o 00:03:55.476 CXX test/cpp_headers/bit_array.o 00:03:55.476 CXX test/cpp_headers/bit_pool.o 00:03:55.476 CXX test/cpp_headers/blob_bdev.o 00:03:55.476 CXX test/cpp_headers/blobfs_bdev.o 00:03:55.476 CXX test/cpp_headers/blobfs.o 00:03:55.476 CXX test/cpp_headers/blob.o 00:03:55.476 CXX test/cpp_headers/conf.o 00:03:55.476 CXX test/cpp_headers/config.o 00:03:55.476 CXX test/cpp_headers/cpuset.o 00:03:55.476 CXX test/cpp_headers/crc16.o 00:03:55.476 CC app/iscsi_tgt/iscsi_tgt.o 00:03:55.476 CC app/spdk_dd/spdk_dd.o 00:03:55.476 CC app/nvmf_tgt/nvmf_main.o 00:03:55.476 CXX test/cpp_headers/crc32.o 00:03:55.476 CC examples/ioat/perf/perf.o 00:03:55.476 CC test/env/vtophys/vtophys.o 00:03:55.476 CC examples/ioat/verify/verify.o 00:03:55.476 CC 
test/env/pci/pci_ut.o 00:03:55.476 CC test/app/jsoncat/jsoncat.o 00:03:55.476 CC examples/util/zipf/zipf.o 00:03:55.476 CC test/thread/poller_perf/poller_perf.o 00:03:55.476 CC test/app/stub/stub.o 00:03:55.476 CC app/fio/nvme/fio_plugin.o 00:03:55.476 CC app/spdk_tgt/spdk_tgt.o 00:03:55.476 CC test/app/histogram_perf/histogram_perf.o 00:03:55.476 CC test/env/memory/memory_ut.o 00:03:55.476 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:55.734 CC test/app/bdev_svc/bdev_svc.o 00:03:55.734 CC test/dma/test_dma/test_dma.o 00:03:55.734 CC app/fio/bdev/fio_plugin.o 00:03:55.734 CC test/env/mem_callbacks/mem_callbacks.o 00:03:55.734 LINK spdk_lspci 00:03:55.734 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:55.735 LINK rpc_client_test 00:03:55.735 LINK spdk_nvme_discover 00:03:56.019 LINK jsoncat 00:03:56.019 LINK vtophys 00:03:56.019 LINK histogram_perf 00:03:56.019 CXX test/cpp_headers/crc64.o 00:03:56.019 LINK interrupt_tgt 00:03:56.019 LINK zipf 00:03:56.019 LINK poller_perf 00:03:56.019 CXX test/cpp_headers/dif.o 00:03:56.019 LINK spdk_trace_record 00:03:56.019 CXX test/cpp_headers/dma.o 00:03:56.019 LINK env_dpdk_post_init 00:03:56.019 LINK nvmf_tgt 00:03:56.019 CXX test/cpp_headers/endian.o 00:03:56.019 CXX test/cpp_headers/env_dpdk.o 00:03:56.019 CXX test/cpp_headers/env.o 00:03:56.019 CXX test/cpp_headers/event.o 00:03:56.019 CXX test/cpp_headers/fd_group.o 00:03:56.019 CXX test/cpp_headers/fd.o 00:03:56.019 CXX test/cpp_headers/file.o 00:03:56.019 CXX test/cpp_headers/ftl.o 00:03:56.019 CXX test/cpp_headers/gpt_spec.o 00:03:56.019 LINK iscsi_tgt 00:03:56.019 CXX test/cpp_headers/hexlify.o 00:03:56.019 LINK stub 00:03:56.019 CXX test/cpp_headers/histogram_data.o 00:03:56.019 CXX test/cpp_headers/idxd.o 00:03:56.019 LINK bdev_svc 00:03:56.019 LINK ioat_perf 00:03:56.019 LINK verify 00:03:56.019 CXX test/cpp_headers/idxd_spec.o 00:03:56.019 LINK spdk_tgt 00:03:56.019 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:56.019 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:56.019 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:56.019 CXX test/cpp_headers/init.o 00:03:56.325 LINK mem_callbacks 00:03:56.325 CXX test/cpp_headers/ioat.o 00:03:56.325 CXX test/cpp_headers/ioat_spec.o 00:03:56.325 CXX test/cpp_headers/iscsi_spec.o 00:03:56.325 CXX test/cpp_headers/json.o 00:03:56.325 CXX test/cpp_headers/jsonrpc.o 00:03:56.325 LINK spdk_dd 00:03:56.325 CXX test/cpp_headers/keyring.o 00:03:56.325 LINK spdk_trace 00:03:56.325 CXX test/cpp_headers/keyring_module.o 00:03:56.325 LINK pci_ut 00:03:56.325 CXX test/cpp_headers/likely.o 00:03:56.325 CXX test/cpp_headers/log.o 00:03:56.325 CXX test/cpp_headers/lvol.o 00:03:56.325 CXX test/cpp_headers/memory.o 00:03:56.325 CXX test/cpp_headers/mmio.o 00:03:56.325 CXX test/cpp_headers/nbd.o 00:03:56.325 CXX test/cpp_headers/notify.o 00:03:56.325 CXX test/cpp_headers/nvme.o 00:03:56.325 CXX test/cpp_headers/nvme_intel.o 00:03:56.325 CXX test/cpp_headers/nvme_ocssd.o 00:03:56.325 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:56.325 CXX test/cpp_headers/nvme_spec.o 00:03:56.325 CXX test/cpp_headers/nvme_zns.o 00:03:56.325 CXX test/cpp_headers/nvmf_cmd.o 00:03:56.325 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:56.325 CXX test/cpp_headers/nvmf.o 00:03:56.325 CXX test/cpp_headers/nvmf_spec.o 00:03:56.325 CXX test/cpp_headers/nvmf_transport.o 00:03:56.325 CXX test/cpp_headers/opal.o 00:03:56.325 CXX test/cpp_headers/opal_spec.o 00:03:56.325 LINK test_dma 00:03:56.594 CXX test/cpp_headers/pci_ids.o 00:03:56.594 CC examples/sock/hello_world/hello_sock.o 
00:03:56.594 CXX test/cpp_headers/pipe.o 00:03:56.594 CXX test/cpp_headers/queue.o 00:03:56.594 LINK nvme_fuzz 00:03:56.594 CC examples/thread/thread/thread_ex.o 00:03:56.594 CXX test/cpp_headers/reduce.o 00:03:56.594 CC test/event/reactor/reactor.o 00:03:56.594 CXX test/cpp_headers/rpc.o 00:03:56.594 CXX test/cpp_headers/scheduler.o 00:03:56.594 CC test/event/event_perf/event_perf.o 00:03:56.594 CC examples/vmd/lsvmd/lsvmd.o 00:03:56.594 CC examples/idxd/perf/perf.o 00:03:56.594 CC test/event/reactor_perf/reactor_perf.o 00:03:56.594 CXX test/cpp_headers/scsi.o 00:03:56.594 CXX test/cpp_headers/scsi_spec.o 00:03:56.594 CC examples/vmd/led/led.o 00:03:56.594 CC test/event/app_repeat/app_repeat.o 00:03:56.594 LINK spdk_nvme 00:03:56.852 LINK spdk_bdev 00:03:56.852 CXX test/cpp_headers/sock.o 00:03:56.852 CXX test/cpp_headers/stdinc.o 00:03:56.852 CXX test/cpp_headers/string.o 00:03:56.852 CXX test/cpp_headers/thread.o 00:03:56.852 CXX test/cpp_headers/trace.o 00:03:56.852 CXX test/cpp_headers/trace_parser.o 00:03:56.852 CXX test/cpp_headers/tree.o 00:03:56.852 CXX test/cpp_headers/ublk.o 00:03:56.852 CXX test/cpp_headers/util.o 00:03:56.852 CC test/event/scheduler/scheduler.o 00:03:56.852 CXX test/cpp_headers/uuid.o 00:03:56.852 CXX test/cpp_headers/version.o 00:03:56.852 CXX test/cpp_headers/vfio_user_pci.o 00:03:56.852 CXX test/cpp_headers/vfio_user_spec.o 00:03:56.852 CXX test/cpp_headers/vhost.o 00:03:56.852 CXX test/cpp_headers/vmd.o 00:03:56.852 CXX test/cpp_headers/xor.o 00:03:56.852 CXX test/cpp_headers/zipf.o 00:03:56.852 CC app/vhost/vhost.o 00:03:56.852 LINK vhost_fuzz 00:03:56.852 LINK memory_ut 00:03:56.852 LINK lsvmd 00:03:56.852 LINK reactor 00:03:56.852 LINK event_perf 00:03:56.852 LINK reactor_perf 00:03:56.852 LINK spdk_nvme_perf 00:03:57.113 LINK spdk_nvme_identify 00:03:57.113 LINK app_repeat 00:03:57.113 LINK led 00:03:57.113 LINK hello_sock 00:03:57.113 LINK spdk_top 00:03:57.113 LINK thread 00:03:57.113 CC test/nvme/reset/reset.o 00:03:57.113 CC test/nvme/simple_copy/simple_copy.o 00:03:57.113 CC test/nvme/startup/startup.o 00:03:57.113 CC test/nvme/boot_partition/boot_partition.o 00:03:57.113 CC test/nvme/sgl/sgl.o 00:03:57.113 CC test/nvme/connect_stress/connect_stress.o 00:03:57.113 CC test/nvme/reserve/reserve.o 00:03:57.113 CC test/nvme/aer/aer.o 00:03:57.113 CC test/nvme/err_injection/err_injection.o 00:03:57.113 CC test/nvme/e2edp/nvme_dp.o 00:03:57.113 CC test/nvme/compliance/nvme_compliance.o 00:03:57.113 CC test/nvme/overhead/overhead.o 00:03:57.113 CC test/blobfs/mkfs/mkfs.o 00:03:57.372 CC test/accel/dif/dif.o 00:03:57.372 LINK vhost 00:03:57.372 CC test/nvme/fused_ordering/fused_ordering.o 00:03:57.372 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:57.372 CC test/nvme/fdp/fdp.o 00:03:57.372 CC test/lvol/esnap/esnap.o 00:03:57.372 LINK scheduler 00:03:57.372 CC test/nvme/cuse/cuse.o 00:03:57.372 LINK idxd_perf 00:03:57.372 LINK boot_partition 00:03:57.372 LINK connect_stress 00:03:57.372 LINK reserve 00:03:57.630 LINK fused_ordering 00:03:57.630 LINK sgl 00:03:57.630 LINK startup 00:03:57.630 CC examples/nvme/hello_world/hello_world.o 00:03:57.630 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:57.630 CC examples/nvme/reconnect/reconnect.o 00:03:57.630 CC examples/nvme/hotplug/hotplug.o 00:03:57.630 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:57.630 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:57.630 CC examples/nvme/arbitration/arbitration.o 00:03:57.630 CC examples/nvme/abort/abort.o 00:03:57.630 LINK reset 00:03:57.630 LINK nvme_dp 
00:03:57.630 LINK err_injection 00:03:57.630 LINK mkfs 00:03:57.630 LINK aer 00:03:57.630 LINK simple_copy 00:03:57.630 LINK doorbell_aers 00:03:57.630 LINK overhead 00:03:57.888 CC examples/accel/perf/accel_perf.o 00:03:57.888 LINK fdp 00:03:57.888 LINK pmr_persistence 00:03:57.888 CC examples/blob/cli/blobcli.o 00:03:57.888 LINK nvme_compliance 00:03:57.888 CC examples/blob/hello_world/hello_blob.o 00:03:57.888 LINK dif 00:03:57.888 LINK hotplug 00:03:57.888 LINK cmb_copy 00:03:57.888 LINK hello_world 00:03:57.888 LINK reconnect 00:03:57.888 LINK abort 00:03:57.888 LINK arbitration 00:03:58.145 LINK hello_blob 00:03:58.145 LINK nvme_manage 00:03:58.145 CC test/bdev/bdevio/bdevio.o 00:03:58.145 LINK accel_perf 00:03:58.403 LINK blobcli 00:03:58.403 LINK iscsi_fuzz 00:03:58.660 CC examples/bdev/hello_world/hello_bdev.o 00:03:58.660 CC examples/bdev/bdevperf/bdevperf.o 00:03:58.660 LINK bdevio 00:03:58.917 LINK cuse 00:03:58.917 LINK hello_bdev 00:03:59.483 LINK bdevperf 00:03:59.740 CC examples/nvmf/nvmf/nvmf.o 00:03:59.996 LINK nvmf 00:04:02.527 LINK esnap 00:04:03.096 00:04:03.096 real 0m41.664s 00:04:03.096 user 7m22.594s 00:04:03.096 sys 1m49.918s 00:04:03.096 00:49:52 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:04:03.096 00:49:52 make -- common/autotest_common.sh@10 -- $ set +x 00:04:03.096 ************************************ 00:04:03.096 END TEST make 00:04:03.096 ************************************ 00:04:03.096 00:49:52 -- common/autotest_common.sh@1142 -- $ return 0 00:04:03.096 00:49:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:03.096 00:49:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:03.096 00:49:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:03.096 00:49:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.096 00:49:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:03.096 00:49:52 -- pm/common@44 -- $ pid=902785 00:04:03.096 00:49:52 -- pm/common@50 -- $ kill -TERM 902785 00:04:03.096 00:49:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.096 00:49:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:03.096 00:49:52 -- pm/common@44 -- $ pid=902787 00:04:03.096 00:49:52 -- pm/common@50 -- $ kill -TERM 902787 00:04:03.096 00:49:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.096 00:49:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:03.096 00:49:52 -- pm/common@44 -- $ pid=902789 00:04:03.096 00:49:52 -- pm/common@50 -- $ kill -TERM 902789 00:04:03.096 00:49:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.096 00:49:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:03.096 00:49:52 -- pm/common@44 -- $ pid=902819 00:04:03.096 00:49:52 -- pm/common@50 -- $ sudo -E kill -TERM 902819 00:04:03.096 00:49:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:03.096 00:49:52 -- nvmf/common.sh@7 -- # uname -s 00:04:03.096 00:49:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.096 00:49:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:03.096 00:49:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.096 00:49:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:03.096 00:49:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.096 00:49:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.096 00:49:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:03.096 00:49:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.096 00:49:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.096 00:49:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.096 00:49:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:03.096 00:49:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:03.096 00:49:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.096 00:49:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.096 00:49:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:03.096 00:49:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:03.096 00:49:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:03.096 00:49:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:03.096 00:49:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.096 00:49:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.096 00:49:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.096 00:49:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.096 00:49:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.096 00:49:52 -- paths/export.sh@5 -- # export PATH 00:04:03.096 00:49:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.096 00:49:52 -- nvmf/common.sh@47 -- # : 0 00:04:03.096 00:49:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:03.097 00:49:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:03.097 00:49:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:03.097 00:49:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.097 00:49:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.097 00:49:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:03.097 00:49:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:03.097 00:49:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:03.097 00:49:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:03.097 00:49:52 -- spdk/autotest.sh@32 -- # uname -s 00:04:03.097 00:49:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:03.097 00:49:52 -- 
spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:03.097 00:49:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:03.097 00:49:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:03.097 00:49:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:03.097 00:49:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:03.097 00:49:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:03.097 00:49:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:03.097 00:49:52 -- spdk/autotest.sh@48 -- # udevadm_pid=979486 00:04:03.097 00:49:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:03.097 00:49:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:03.097 00:49:52 -- pm/common@17 -- # local monitor 00:04:03.097 00:49:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.097 00:49:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.097 00:49:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.097 00:49:52 -- pm/common@21 -- # date +%s 00:04:03.097 00:49:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:03.097 00:49:52 -- pm/common@21 -- # date +%s 00:04:03.097 00:49:52 -- pm/common@25 -- # sleep 1 00:04:03.097 00:49:52 -- pm/common@21 -- # date +%s 00:04:03.097 00:49:52 -- pm/common@21 -- # date +%s 00:04:03.097 00:49:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720910992 00:04:03.097 00:49:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720910992 00:04:03.097 00:49:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720910992 00:04:03.097 00:49:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720910992 00:04:03.097 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720910992_collect-vmstat.pm.log 00:04:03.097 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720910992_collect-cpu-load.pm.log 00:04:03.097 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720910992_collect-cpu-temp.pm.log 00:04:03.097 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720910992_collect-bmc-pm.bmc.pm.log 00:04:04.032 00:49:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:04.032 00:49:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:04.032 00:49:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.032 00:49:53 -- common/autotest_common.sh@10 -- # set +x 00:04:04.032 00:49:53 -- spdk/autotest.sh@59 -- # create_test_list 00:04:04.032 00:49:53 -- common/autotest_common.sh@746 
-- # xtrace_disable 00:04:04.032 00:49:53 -- common/autotest_common.sh@10 -- # set +x 00:04:04.032 00:49:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:04.032 00:49:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.032 00:49:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.032 00:49:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:04.032 00:49:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.032 00:49:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:04.032 00:49:53 -- common/autotest_common.sh@1455 -- # uname 00:04:04.032 00:49:53 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:04.032 00:49:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:04.032 00:49:53 -- common/autotest_common.sh@1475 -- # uname 00:04:04.032 00:49:53 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:04.032 00:49:53 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:04.032 00:49:53 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:04.032 00:49:53 -- spdk/autotest.sh@72 -- # hash lcov 00:04:04.032 00:49:53 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:04.032 00:49:53 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:04.032 --rc lcov_branch_coverage=1 00:04:04.032 --rc lcov_function_coverage=1 00:04:04.032 --rc genhtml_branch_coverage=1 00:04:04.032 --rc genhtml_function_coverage=1 00:04:04.032 --rc genhtml_legend=1 00:04:04.032 --rc geninfo_all_blocks=1 00:04:04.032 ' 00:04:04.032 00:49:53 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:04.032 --rc lcov_branch_coverage=1 00:04:04.032 --rc lcov_function_coverage=1 00:04:04.032 --rc genhtml_branch_coverage=1 00:04:04.032 --rc genhtml_function_coverage=1 00:04:04.032 --rc genhtml_legend=1 00:04:04.032 --rc geninfo_all_blocks=1 00:04:04.032 ' 00:04:04.032 00:49:53 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:04.032 --rc lcov_branch_coverage=1 00:04:04.032 --rc lcov_function_coverage=1 00:04:04.032 --rc genhtml_branch_coverage=1 00:04:04.032 --rc genhtml_function_coverage=1 00:04:04.032 --rc genhtml_legend=1 00:04:04.032 --rc geninfo_all_blocks=1 00:04:04.032 --no-external' 00:04:04.032 00:49:53 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:04.032 --rc lcov_branch_coverage=1 00:04:04.032 --rc lcov_function_coverage=1 00:04:04.032 --rc genhtml_branch_coverage=1 00:04:04.032 --rc genhtml_function_coverage=1 00:04:04.032 --rc genhtml_legend=1 00:04:04.032 --rc geninfo_all_blocks=1 00:04:04.032 --no-external' 00:04:04.032 00:49:53 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:04.289 lcov: LCOV version 1.14 00:04:04.289 00:49:53 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:09.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:09.558 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:09.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:09.558 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:09.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:09.558 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:09.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:09.558 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:09.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:09.558 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:09.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions 
found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 
00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:09.559 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:09.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:09.559 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 
00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:09.818 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:09.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:09.818 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:31.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:31.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:37.035 00:50:26 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:37.035 00:50:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.035 00:50:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.035 00:50:26 -- spdk/autotest.sh@91 -- # rm -f 00:04:37.035 00:50:26 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.411 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:38.411 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:38.411 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:38.411 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:38.411 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:38.411 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:38.411 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:38.411 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:38.411 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:38.411 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:38.411 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:38.411 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:38.411 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:38.411 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:38.411 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:38.411 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:38.411 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:38.411 00:50:27 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:38.411 00:50:27 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:38.411 00:50:27 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:38.411 00:50:27 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:38.411 00:50:27 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:38.411 00:50:27 -- 
common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:38.411 00:50:27 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:38.411 00:50:27 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.411 00:50:27 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:38.411 00:50:27 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:38.411 00:50:27 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:38.411 00:50:27 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:38.411 00:50:27 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:38.411 00:50:27 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:38.411 00:50:27 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:38.670 No valid GPT data, bailing 00:04:38.670 00:50:27 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.670 00:50:27 -- scripts/common.sh@391 -- # pt= 00:04:38.670 00:50:27 -- scripts/common.sh@392 -- # return 1 00:04:38.670 00:50:27 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:38.670 1+0 records in 00:04:38.670 1+0 records out 00:04:38.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00255465 s, 410 MB/s 00:04:38.670 00:50:27 -- spdk/autotest.sh@118 -- # sync 00:04:38.670 00:50:27 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:38.670 00:50:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:38.670 00:50:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:40.573 00:50:29 -- spdk/autotest.sh@124 -- # uname -s 00:04:40.573 00:50:29 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:40.573 00:50:29 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:40.573 00:50:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.573 00:50:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.573 00:50:29 -- common/autotest_common.sh@10 -- # set +x 00:04:40.573 ************************************ 00:04:40.573 START TEST setup.sh 00:04:40.573 ************************************ 00:04:40.573 00:50:29 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:40.573 * Looking for test storage... 00:04:40.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:40.573 00:50:29 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:40.573 00:50:29 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:40.573 00:50:29 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:40.573 00:50:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.573 00:50:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.573 00:50:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:40.573 ************************************ 00:04:40.573 START TEST acl 00:04:40.573 ************************************ 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:40.573 * Looking for test storage... 
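The pre-cleanup traced just above first filters out zoned (ZNS) namespaces, then wipes the first MiB of any NVMe namespace that is not in use: spdk-gpt.py found no valid GPT and blkid reported no partition-table type, so the dd ran. A condensed sketch of that filter-and-wipe logic as the xtrace suggests (helper structure is hedged, the individual commands appear in the log):

    # Hedged sketch of the pre-cleanup step.
    for sysdir in /sys/block/nvme*; do
        dev=${sysdir##*/}
        # Conventional namespaces report "none" in queue/zoned; skip anything zoned.
        if [[ -e $sysdir/queue/zoned && $(cat "$sysdir/queue/zoned") != none ]]; then
            continue
        fi
        # Only wipe when blkid sees no partition-table type, i.e. the device is unused.
        if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
            dd if=/dev/zero of="/dev/$dev" bs=1M count=1   # zero the first 1 MiB
        fi
    done
    sync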
00:04:40.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:40.573 00:50:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:40.573 00:50:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:40.573 00:50:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:40.573 00:50:29 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:40.573 00:50:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:40.573 00:50:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:40.573 00:50:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:40.573 00:50:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.573 00:50:29 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.479 00:50:31 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:42.479 00:50:31 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:42.479 00:50:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.479 00:50:31 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:42.479 00:50:31 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.479 00:50:31 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:43.417 Hugepages 00:04:43.417 node hugesize free / total 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:04:43.417 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 
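The long run of [[ ... == *:*:*.* ]] checks here is acl.sh walking the `setup.sh status` table row by row: header rows such as the Hugepages lines fail the BDF pattern and are skipped, ioatdma DMA channels are skipped because their driver is not nvme, and only NVMe-bound controllers are collected. A reduced sketch of that loop (field names follow the xtrace; SPDK_DIR is illustrative):

    # Hedged sketch of the device-collection loop.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue    # skip "Hugepages"/header rows
        [[ $driver == nvme ]] || continue    # ioatdma channels are not test targets
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <("$SPDK_DIR/scripts/setup.sh" status)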
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.417 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:43.418 00:50:32 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:43.418 00:50:32 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.418 00:50:32 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.418 00:50:32 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:43.418 ************************************ 00:04:43.418 START TEST denied 00:04:43.418 ************************************ 00:04:43.418 00:50:32 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:43.418 00:50:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:43.418 00:50:32 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:43.418 00:50:32 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:43.418 00:50:32 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.418 00:50:32 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.791 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:44.792 00:50:34 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.792 00:50:34 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.332 00:04:47.332 real 0m3.965s 00:04:47.332 user 0m1.213s 00:04:47.332 sys 0m1.860s 00:04:47.332 00:50:36 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.332 00:50:36 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:47.332 ************************************ 00:04:47.332 END TEST denied 00:04:47.332 ************************************ 00:04:47.332 00:50:36 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:47.332 00:50:36 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:47.332 00:50:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.332 00:50:36 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.332 00:50:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:47.332 ************************************ 00:04:47.332 START TEST allowed 00:04:47.332 ************************************ 00:04:47.332 00:50:36 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:47.332 00:50:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:47.332 00:50:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:47.332 00:50:36 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:47.332 00:50:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.332 00:50:36 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.890 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.890 00:50:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:49.890 00:50:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:49.890 00:50:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:49.890 00:50:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.890 00:50:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.272 00:04:51.272 real 0m3.970s 00:04:51.272 user 0m1.030s 00:04:51.272 sys 0m1.715s 00:04:51.272 00:50:40 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.272 00:50:40 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:51.272 ************************************ 00:04:51.272 END TEST allowed 00:04:51.272 ************************************ 00:04:51.272 00:50:40 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:51.272 00:04:51.272 real 0m10.785s 00:04:51.272 user 0m3.388s 00:04:51.272 sys 0m5.356s 00:04:51.272 00:50:40 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.272 00:50:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:51.272 ************************************ 00:04:51.272 END TEST acl 00:04:51.272 ************************************ 00:04:51.272 00:50:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:51.272 00:50:40 setup.sh -- 
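The denied/allowed pair above exercises setup.sh's block and allow lists: with PCI_BLOCKED naming the NVMe controller, `setup.sh config` must print "Skipping denied controller at 0000:88:00.0" and leave the kernel nvme driver bound; with PCI_ALLOWED naming it instead, the same controller is handed over to vfio-pci. A hedged sketch of how those environment variables drive the script (SPDK_DIR is illustrative, the variables and messages match the log):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Blocked: config must skip the controller and nvme stays bound.
    PCI_BLOCKED=' 0000:88:00.0' "$SPDK_DIR/scripts/setup.sh" config \
        | grep 'Skipping denied controller at 0000:88:00.0'
    "$SPDK_DIR/scripts/setup.sh" reset
    # Allowed: the controller should be rebound nvme -> vfio-pci.
    PCI_ALLOWED='0000:88:00.0' "$SPDK_DIR/scripts/setup.sh" config \
        | grep -E '0000:88:00.0 .*: nvme -> .*'
    "$SPDK_DIR/scripts/setup.sh" reset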
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:51.272 00:50:40 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.272 00:50:40 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.272 00:50:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.534 ************************************ 00:04:51.534 START TEST hugepages 00:04:51.534 ************************************ 00:04:51.534 00:50:40 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:51.534 * Looking for test storage... 00:04:51.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41661260 kB' 'MemAvailable: 45170600 kB' 'Buffers: 2704 kB' 'Cached: 12281940 kB' 'SwapCached: 0 kB' 'Active: 9284856 kB' 'Inactive: 3506552 kB' 'Active(anon): 8890504 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510080 kB' 'Mapped: 182420 kB' 'Shmem: 8383740 kB' 'KReclaimable: 204120 kB' 'Slab: 581272 kB' 'SReclaimable: 204120 kB' 'SUnreclaim: 377152 kB' 'KernelStack: 12944 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10016236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.534 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.535 
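The wall of "[[ X == Hugepagesize ]] / continue" lines above is bash xtrace of get_meminfo scanning /proc/meminfo field by field until it reaches Hugepagesize (2048 kB on this host); clear_hp then zeroes every per-node hugepage pool before the sub-tests run. A condensed sketch of both helpers as the xtrace suggests (the real get_meminfo also accepts a NUMA node and reads that node's meminfo):

    # Hedged sketch of get_meminfo/clear_hp.
    get_meminfo() {                      # get_meminfo Hugepagesize -> "2048"
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    clear_hp() {                         # zero every hugepage pool on every node
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
    }
    default_hugepages=$(get_meminfo Hugepagesize)   # 2048 (kB) on this box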
00:50:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:51.535 00:50:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:51.535 00:50:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.535 00:50:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.535 00:50:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.535 ************************************ 00:04:51.535 START TEST default_setup 00:04:51.535 ************************************ 00:04:51.535 00:50:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:51.535 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.536 00:50:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.915 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:52.915 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:52.915 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:52.915 
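default_setup above requests 2097152 kB of hugepages on node 0; with a 2048 kB page size that is nr_hugepages=1024, which is exactly what the MemInfo dump further down reports as HugePages_Total: 1024. As a quick check of that arithmetic:

    # 2 GiB requested / 2 MiB per hugepage = 1024 pages (matches nr_hugepages above)
    echo $(( 2097152 / 2048 ))    # -> 1024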
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:52.915 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:52.915 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:52.915 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:52.915 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:52.915 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:52.915 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:52.915 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:52.915 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:52.915 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:52.915 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:52.915 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:52.915 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:53.859 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43748620 kB' 'MemAvailable: 47257928 kB' 'Buffers: 2704 kB' 'Cached: 12282032 kB' 'SwapCached: 0 kB' 'Active: 9302676 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908324 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527780 kB' 'Mapped: 182492 kB' 'Shmem: 8383832 kB' 'KReclaimable: 204056 kB' 'Slab: 580508 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376452 kB' 
'KernelStack: 12832 kB' 'PageTables: 8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 
00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.859 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.860 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.861 00:50:43 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43752112 kB' 'MemAvailable: 47261420 kB' 'Buffers: 2704 kB' 'Cached: 12282032 kB' 'SwapCached: 0 kB' 'Active: 9303376 kB' 'Inactive: 3506552 kB' 'Active(anon): 8909024 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528408 kB' 'Mapped: 182440 kB' 'Shmem: 8383832 kB' 'KReclaimable: 204056 kB' 'Slab: 580500 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376444 kB' 'KernelStack: 12880 kB' 'PageTables: 8212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.861 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.862 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.863 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43752540 kB' 'MemAvailable: 47261848 kB' 'Buffers: 2704 kB' 'Cached: 12282052 kB' 'SwapCached: 0 kB' 'Active: 9302656 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908304 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527616 kB' 'Mapped: 182440 kB' 'Shmem: 8383852 kB' 'KReclaimable: 204056 kB' 'Slab: 580632 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376576 kB' 'KernelStack: 12864 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.864 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 
00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.865 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:53.866 nr_hugepages=1024 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:53.866 resv_hugepages=0 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:53.866 surplus_hugepages=0 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:53.866 anon_hugepages=0 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.866 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43752448 
kB' 'MemAvailable: 47261756 kB' 'Buffers: 2704 kB' 'Cached: 12282052 kB' 'SwapCached: 0 kB' 'Active: 9302360 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908008 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527352 kB' 'Mapped: 182440 kB' 'Shmem: 8383852 kB' 'KReclaimable: 204056 kB' 'Slab: 580632 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376576 kB' 'KernelStack: 12880 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.867 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
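The check being traced here is the hugepage accounting done by verify_nr_hugepages: after default_setup asks for 1024 hugepages, the kernel's HugePages_Total must equal the requested count plus any surplus and reserved pages, both of which are read back from /proc/meminfo and are 0 in this run. A minimal sketch of that arithmetic, using the values echoed in this trace (the variable names are illustrative, not the ones from setup/hugepages.sh):

    nr_hugepages=1024   # requested by default_setup
    surp=0              # HugePages_Surp read back from /proc/meminfo
    resv=0              # HugePages_Rsvd read back from /proc/meminfo
    total=1024          # HugePages_Total read back from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"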
00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.868 00:50:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.868 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
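The long match/continue sequence above is the trace of a plain key lookup over /proc/meminfo: each line is split on ': ' into a key and a value, non-matching keys are skipped, and the value of the requested key (HugePages_Total here) is echoed back. A self-contained sketch of the same pattern follows; meminfo_value is an illustrative name, not the helper defined in setup/common.sh.

    shopt -s extglob

    # meminfo_value KEY [NODE]: print KEY's value from /proc/meminfo, or from the
    # per-node file when NODE is given (per-node lines carry a "Node <id> " prefix).
    meminfo_value() {
        local key=$1 node=${2:-} file=/proc/meminfo line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            file=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }              # strip the per-node prefix if present
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done <"$file"
        return 1
    }

With the meminfo dump printed above, meminfo_value HugePages_Total would print 1024, and meminfo_value HugePages_Surp 0 would read node 0's per-node file instead of the global one.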
00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25895664 kB' 'MemUsed: 6934220 kB' 'SwapCached: 0 kB' 'Active: 3548100 kB' 'Inactive: 109764 kB' 'Active(anon): 3437212 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3403872 kB' 'Mapped: 50140 kB' 'AnonPages: 257124 kB' 'Shmem: 3183220 kB' 'KernelStack: 6968 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93828 kB' 'Slab: 315796 kB' 'SReclaimable: 93828 kB' 'SUnreclaim: 221968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.869 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.870 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:53.871 node0=1024 expecting 1024 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:53.871 00:04:53.871 real 0m2.428s 00:04:53.871 user 0m0.623s 00:04:53.871 sys 0m0.929s 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.871 00:50:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:53.871 ************************************ 00:04:53.871 END TEST default_setup 00:04:53.871 ************************************ 00:04:54.172 00:50:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:54.172 00:50:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:54.172 00:50:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.172 00:50:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.172 00:50:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.172 ************************************ 00:04:54.172 START TEST per_node_1G_alloc 00:04:54.172 ************************************ 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:54.172 00:50:43 
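The per-node pass traced above reads HugePages_Total and HugePages_Surp from /sys/devices/system/node/node0/meminfo and reports "node0=1024 expecting 1024". Reusing the illustrative meminfo_value sketch from earlier, the same verification over all NUMA nodes might look like this (the expected array is an illustrative stand-in for the test's nodes_test bookkeeping):

    declare -A expected=([0]=1024 [1]=0)     # what default_setup expects per node in this run
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(meminfo_value HugePages_Total "$node")
        surp=$(meminfo_value HugePages_Surp "$node")
        echo "node${node}=${total} expecting ${expected[$node]} (surplus ${surp})"
    done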
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.172 00:50:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.115 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.115 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:55.115 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.115 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.115 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.115 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.115 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.115 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:55.115 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.115 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.115 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.115 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.115 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.115 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.115 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.115 
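per_node_1G_alloc requests 1 GiB worth of hugepages on each of nodes 0 and 1: 1048576 kB divided by the 2048 kB default hugepage size gives 512 pages per node, which is exported as NRHUGE=512 and HUGENODE=0,1 before scripts/setup.sh is re-run (the "Already using the vfio-pci driver" lines are that script walking the PCI devices). A hedged sketch of the arithmetic, with illustrative variable names and reusing the meminfo_value helper from above:

    size_kb=1048576                                   # 1 GiB requested on each node
    hugepage_kb=$(meminfo_value Hugepagesize)         # 2048 on this machine
    nr_per_node=$(( size_kb / hugepage_kb ))          # -> 512
    user_nodes=(0 1)
    declare -A nodes_test
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_per_node                # 512 hugepages requested per node
    done
    # As the trace suggests, setup.sh honours NRHUGE/HUGENODE and writes the count into
    # each node's sysfs entry, roughly:
    #   /sys/devices/system/node/node<N>/hugepages/hugepages-2048kB/nr_hugepages
    NRHUGE=$nr_per_node HUGENODE=0,1 ./scripts/setup.sh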
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:55.115 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43760524 kB' 'MemAvailable: 47269832 kB' 'Buffers: 2704 kB' 'Cached: 12282136 kB' 'SwapCached: 0 kB' 'Active: 9303008 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908656 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527900 kB' 'Mapped: 182496 kB' 'Shmem: 8383936 kB' 'KReclaimable: 204056 kB' 'Slab: 580784 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376728 kB' 'KernelStack: 12928 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.380 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 
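The 'always [madvise] never' pattern test traced for verify_nr_hugepages is a transparent-hugepage gate: AnonHugePages is only worth reporting when THP is not pinned to never in /sys/kernel/mm/transparent_hugepage/enabled. A small sketch of that gate, again using the illustrative meminfo_value helper:

    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_mode != *"[never]"* ]]; then
        anon_kb=$(meminfo_value AnonHugePages)    # kB of anonymous memory backed by THP
        echo "anon_hugepages=${anon_kb}"
    fi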
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.381 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
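The trace above has just finished one pass of the meminfo helper: it walks /proc/meminfo field by field with IFS=': ' and read -r var val _, continues past every field that is not the one requested (here AnonHugePages), then echoes the matching value and returns, which hugepages.sh stores as anon=0. A minimal sketch of that loop, reconstructed from the trace alone; the helper name below is a simplified stand-in, not the exact setup/common.sh code:

#!/usr/bin/env bash
# Sketch of the field-scanning loop visible in the trace (simplified, hypothetical name).
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested one is reached, then
        # print its numeric value (the trailing "kB" lands in "_").
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Usage mirroring the anon=0 assignment seen in the trace:
anon=$(get_meminfo_field AnonHugePages)
echo "anon_hugepages=${anon}"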
get_meminfo HugePages_Surp 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43760376 kB' 'MemAvailable: 47269684 kB' 'Buffers: 2704 kB' 'Cached: 12282152 kB' 'SwapCached: 0 kB' 'Active: 9302724 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908372 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527568 kB' 'Mapped: 182496 kB' 'Shmem: 8383952 kB' 'KReclaimable: 204056 kB' 'Slab: 580760 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376704 kB' 'KernelStack: 12928 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.382 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.383 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43760812 kB' 'MemAvailable: 47270120 kB' 'Buffers: 2704 kB' 'Cached: 12282160 kB' 'SwapCached: 0 kB' 'Active: 9303436 kB' 'Inactive: 3506552 kB' 'Active(anon): 8909084 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528264 kB' 'Mapped: 182496 kB' 'Shmem: 8383960 kB' 'KReclaimable: 204056 kB' 'Slab: 580760 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376704 kB' 'KernelStack: 12960 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196740 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 
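Before this HugePages_Rsvd pass, the trace also shows how the helper picks its data source: mem_f defaults to /proc/meminfo, a per-node file under /sys/devices/system/node/node<N>/meminfo is only considered when a node argument was supplied and the file exists, and mapfile plus a "Node <N> " prefix strip normalise both sources to the same layout. A sketch of that selection logic, assuming the simplified structure below (the real setup/common.sh may order or nest the checks differently):

#!/usr/bin/env bash
# Sketch of the node-aware meminfo source selection seen in the trace (simplified).
shopt -s extglob   # needed for the +([0-9]) pattern in the prefix strip

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem_f line
    local -a mem

    mem_f=/proc/meminfo
    # Prefer the per-node view when a node was requested and its file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node lines look like "Node 0 MemTotal: ... kB"; drop the prefix
    # so both sources parse the same way.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Usage matching the values echoed later in the trace:
get_meminfo HugePages_Rsvd      # system-wide
get_meminfo HugePages_Rsvd 0    # NUMA node 0, if present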
00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.384 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:55.385 nr_hugepages=1024 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.385 
resv_hugepages=0 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.385 surplus_hugepages=0 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.385 anon_hugepages=0 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:55.385 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43761316 kB' 'MemAvailable: 47270624 kB' 'Buffers: 2704 kB' 'Cached: 12282164 kB' 'SwapCached: 0 kB' 'Active: 9302788 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908436 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527612 kB' 'Mapped: 182496 kB' 'Shmem: 8383964 kB' 'KReclaimable: 204056 kB' 'Slab: 580896 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376840 kB' 'KernelStack: 12928 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196740 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 
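At this point hugepages.sh has the three counters it collected (anon, surp, resv), echoes them alongside nr_hugepages=1024, and runs the two arithmetic checks visible at @107 and @109 before looking up HugePages_Total. What the already-expanded 1024 on the left of those checks stands for is not visible in the trace; the sketch below assumes it is the kernel's HugePages_Free counter, keeps the variable names from the trace, and simplifies the surrounding logic:

#!/usr/bin/env bash
# Sketch of the hugepage accounting check, reconstructed from the hugepages.sh
# line numbers in the trace; the "free" counter on the left is an assumption.
nr_hugepages=1024                       # pages requested by the test

anon=$(awk '/^AnonHugePages:/   {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/  {print $2}' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# Same shape as the @107/@109 checks in the trace: the free count must account
# for the requested pages plus any surplus and reserved ones.
if (( free == nr_hugepages + surp + resv )) && (( free == nr_hugepages )); then
    echo "hugepage accounting consistent"
else
    echo "unexpected hugepage counters" >&2
fi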
00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.386 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.387 00:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26959028 kB' 'MemUsed: 5870856 kB' 'SwapCached: 0 kB' 'Active: 3548240 kB' 'Inactive: 109764 kB' 'Active(anon): 3437352 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3403888 kB' 'Mapped: 50196 kB' 'AnonPages: 257280 kB' 'Shmem: 3183236 kB' 'KernelStack: 6984 kB' 'PageTables: 4988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93828 kB' 'Slab: 316016 kB' 'SReclaimable: 93828 kB' 'SUnreclaim: 222188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.387 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 
00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.388 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16802216 kB' 'MemUsed: 10909608 kB' 'SwapCached: 0 kB' 'Active: 5755152 kB' 'Inactive: 3396788 kB' 'Active(anon): 5471688 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8881040 kB' 'Mapped: 132300 kB' 'AnonPages: 271000 kB' 'Shmem: 5200788 kB' 'KernelStack: 5976 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110228 kB' 'Slab: 264880 kB' 'SReclaimable: 110228 kB' 'SUnreclaim: 154652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.389 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:55.390 node0=512 expecting 512 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:55.390 node1=512 expecting 512 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:55.390 00:04:55.390 real 0m1.397s 00:04:55.390 user 0m0.591s 00:04:55.390 sys 0m0.762s 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.390 00:50:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:55.390 ************************************ 00:04:55.390 END TEST per_node_1G_alloc 00:04:55.390 ************************************ 00:04:55.390 00:50:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:55.390 00:50:44 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:55.390 00:50:44 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.390 00:50:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.390 00:50:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.391 ************************************ 00:04:55.391 START TEST even_2G_alloc 00:04:55.391 ************************************ 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.391 00:50:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.771 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.771 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
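The even_2G_alloc setup above asks for 1024 two-megabyte pages (NRHUGE=1024, HUGE_EVEN_ALLOC=yes) split across both NUMA nodes before handing off to scripts/setup.sh. A minimal sketch of that end state, assuming only the standard Linux sysfs hugepage layout (illustrative shell, not the SPDK setup script itself; the device listing produced by setup.sh continues below):

    # Spread NRHUGE 2 MiB pages evenly across the online NUMA nodes (needs root).
    NRHUGE=${NRHUGE:-1024}
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        echo "$per_node" > "$n/hugepages/hugepages-2048kB/nr_hugepages"
    done

With two nodes this leaves 512 pages on node0 and 512 on node1, which is the state the "nodeN=512 expecting 512" checks in this log verify.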
00:04:56.771 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.771 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.771 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.771 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.771 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.771 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.771 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.771 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.771 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.771 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.771 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.771 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.772 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.772 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.772 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43752368 kB' 'MemAvailable: 47261692 kB' 'Buffers: 2704 kB' 'Cached: 12282288 kB' 'SwapCached: 0 kB' 'Active: 9302952 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908600 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527820 kB' 'Mapped: 182460 kB' 'Shmem: 8384088 kB' 'KReclaimable: 204088 kB' 'Slab: 580744 kB' 'SReclaimable: 204088 kB' 'SUnreclaim: 376656 kB' 'KernelStack: 12912 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
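The long run of "-- # continue" lines around this point is the traced loop in setup/common.sh walking every /proc/meminfo key until it reaches AnonHugePages. A condensed sketch of the same lookup, assuming only the standard /proc and per-node sysfs meminfo formats (get_meminfo_value and its arguments are illustrative names, not the script's own interface):

    get_meminfo_value() {
        local field=$1 node=$2
        local src=/proc/meminfo
        [[ -n $node ]] && src=/sys/devices/system/node/node${node}/meminfo
        # Per-node files prefix every line with "Node N "; strip it, then match the key.
        sed 's/^Node [0-9]* *//' "$src" | awk -F': *' -v k="$field" '$1 == k {print $2+0; exit}'
    }
    get_meminfo_value AnonHugePages      # global counter; 0 kB in the snapshot above
    get_meminfo_value HugePages_Free 0   # per-node variant

The traced scan resumes below and eventually echoes 0 for AnonHugePages.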
00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.772 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.773 00:50:46 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43752948 kB' 'MemAvailable: 47262272 kB' 'Buffers: 2704 kB' 'Cached: 12282292 kB' 'SwapCached: 0 kB' 'Active: 9303128 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908776 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527936 kB' 'Mapped: 182460 kB' 'Shmem: 8384092 kB' 'KReclaimable: 204088 kB' 'Slab: 580736 kB' 'SReclaimable: 204088 kB' 'SUnreclaim: 376648 kB' 'KernelStack: 12928 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.773 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
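The same field-by-field scan repeats here for HugePages_Surp, and afterwards for HugePages_Rsvd. A hypothetical condensation of those reads, using only standard /proc/meminfo (read_counter is an illustrative helper, not part of the SPDK scripts):

    read_counter() { awk -F': *' -v k="$1" '$1 == k {print $2+0; exit}' /proc/meminfo; }
    anon=$(read_counter AnonHugePages)
    surp=$(read_counter HugePages_Surp)
    resv=$(read_counter HugePages_Rsvd)
    echo "anon=${anon} surp=${surp} resv=${resv}"   # the trace records anon=0 and surp=0

The traced loop continues below until the HugePages_Surp line itself is reached and its value (0) is returned.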
00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 
00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.774 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43754580 kB' 'MemAvailable: 47263904 kB' 'Buffers: 2704 kB' 'Cached: 12282308 kB' 'SwapCached: 0 kB' 'Active: 9303112 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908760 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527908 kB' 'Mapped: 182460 kB' 'Shmem: 8384108 kB' 'KReclaimable: 204088 kB' 'Slab: 580836 kB' 'SReclaimable: 204088 kB' 'SUnreclaim: 376748 kB' 'KernelStack: 12928 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.775 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32: the get_meminfo read loop checks each remaining /proc/meminfo key against HugePages_Rsvd and takes the continue branch for all of them (Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree); the trace picks up again below at the final keys.
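The run of [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] checks summarized above is setup/common.sh's get_meminfo helper scanning meminfo output one "key: value" pair at a time until it reaches the key it was asked for. A minimal, self-contained bash sketch of that style of lookup, reconstructed from the trace (the function name and the /sys paths follow the trace, but the body itself is an illustrative assumption, not the SPDK script):

#!/usr/bin/env bash
# Sketch: look up one key from /proc/meminfo or from a per-NUMA-node meminfo file.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node's own meminfo when it exists; its lines
    # carry a "Node <n> " prefix that /proc/meminfo lacks.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is skipped; that skip is what the long run of
        # "continue" statements in the trace corresponds to.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Example lookups matching the values echoed in this run:
#   get_meminfo HugePages_Rsvd       -> 0
#   get_meminfo HugePages_Total      -> 1024
#   get_meminfo HugePages_Surp 0     -> 0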
00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:56.777 nr_hugepages=1024 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.777 resv_hugepages=0 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.777 surplus_hugepages=0 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.777 anon_hugepages=0 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43754928 kB' 'MemAvailable: 47264252 kB' 'Buffers: 2704 kB' 'Cached: 12282328 kB' 'SwapCached: 0 kB' 'Active: 9303128 kB' 'Inactive: 3506552 kB' 'Active(anon): 8908776 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527908 kB' 'Mapped: 182460 kB' 'Shmem: 8384128 kB' 'KReclaimable: 204088 kB' 'Slab: 580828 kB' 'SReclaimable: 204088 kB' 'SUnreclaim: 376740 kB' 'KernelStack: 12928 kB' 'PageTables: 8088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10037632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196788 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:56.777 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32: the same read loop now walks the /proc/meminfo dump just printed looking for HugePages_Total, taking the continue branch for every other key from SwapCached down through Unaccepted; the matching key and its value are traced below.
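Once the global lookup returns, setup/hugepages.sh records nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and asserts (( 1024 == nr_hugepages + surp + resv )); the get_nodes loop traced just below then registers the two NUMA nodes under /sys/devices/system/node and an expected 512 pages on each. A small bash recomputation of that bookkeeping, using only values visible in this log (variable names mirror the trace; this is an illustration, not the test script itself):

#!/usr/bin/env bash
# Recompute the even_2G_alloc bookkeeping from values visible in this log.
nr_hugepages=1024   # HugePages_Total from the global /proc/meminfo dump above
resv=0              # HugePages_Rsvd returned earlier
surp=0              # HugePages_Surp from the same dump
(( nr_hugepages + surp + resv == 1024 )) && echo "global hugepage total consistent"

# Hugepagesize is 2048 kB, so 1024 pages is 2 GiB of hugepage memory.
echo "total hugepage memory: $(( nr_hugepages * 2048 / 1024 )) MiB"

# get_nodes (traced below) discovers node0 and node1 and expects the pages to
# be split evenly; the per-node meminfo dumps report HugePages_Total: 512 each.
nodes_test=(512 512)
total=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))   # same accumulation as setup/hugepages.sh@116
    (( total += nodes_test[node] ))
done
(( total == nr_hugepages )) && echo "even split: ${nodes_test[0]}/${nodes_test[1]} pages over ${#nodes_test[@]} nodes"

With Hugepagesize at 2048 kB, the 1024 pages amount to 2 GiB split 512/512 across node0 and node1, which is what the even_2G_alloc name suggests and what the per-node dumps in this log report.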
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.778 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26948688 kB' 'MemUsed: 5881196 kB' 'SwapCached: 0 kB' 'Active: 3548600 kB' 'Inactive: 109764 kB' 'Active(anon): 3437712 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3403904 kB' 'Mapped: 50160 kB' 
'AnonPages: 257620 kB' 'Shmem: 3183252 kB' 'KernelStack: 6968 kB' 'PageTables: 4848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93828 kB' 'Slab: 315960 kB' 'SReclaimable: 93828 kB' 'SUnreclaim: 222132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:56.779 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32: get_meminfo HugePages_Surp 0 walks the node0 meminfo dump above, taking the continue branch for Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages and FilePmdMapped 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p
]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16805488 kB' 'MemUsed: 10906336 kB' 'SwapCached: 0 kB' 'Active: 5754736 kB' 'Inactive: 3396788 kB' 'Active(anon): 5471272 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8881332 kB' 'Mapped: 132300 kB' 'AnonPages: 270292 kB' 'Shmem: 5201080 kB' 
'KernelStack: 5960 kB' 'PageTables: 3240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110260 kB' 'Slab: 264868 kB' 'SReclaimable: 110260 kB' 'SUnreclaim: 154608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.780 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:56.781 node0=512 expecting 512 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:56.781 node1=512 expecting 512 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:56.781 00:04:56.781 real 0m1.423s 00:04:56.781 user 0m0.609s 00:04:56.781 sys 0m0.774s 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.781 00:50:46 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:56.781 ************************************ 00:04:56.781 END TEST even_2G_alloc 00:04:56.781 ************************************ 00:04:57.040 00:50:46 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:57.040 00:50:46 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:57.040 00:50:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.040 00:50:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.040 00:50:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:57.040 
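The even_2G_alloc trace above is the harness scanning /proc/meminfo and the per-node /sys/devices/system/node/nodeN/meminfo files key by key until it reaches the requested field, which is how it arrives at the "node0=512 expecting 512" / "node1=512 expecting 512" check. The short standalone sketch below only mirrors that lookup pattern (mapfile, the "Node <n> " prefix strip, the IFS=': ' read loop); get_meminfo_sketch is a hypothetical name and this is not the harness's own setup/common.sh.

#!/usr/bin/env bash
# Minimal sketch, assuming a Linux host with bash and optionally NUMA node
# meminfo files under /sys/devices/system/node/. Not the SPDK test harness.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2-}
    local mem_f=/proc/meminfo mem line var val _
    # Per-node meminfo files exist only on NUMA systems; fall back to the
    # global /proc/meminfo otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines are prefixed with "Node <n> "; stripping it is a no-op
    # for /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "Key:   value [kB]" on colon/space, keep only key and value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"    # e.g. 512 for HugePages_Total, or a size in kB
            return 0
        fi
    done
    return 1
}

# Usage matching the per-node check logged above:
echo "node0 HugePages_Total: $(get_meminfo_sketch HugePages_Total 0)"
echo "node1 HugePages_Total: $(get_meminfo_sketch HugePages_Total 1)"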
************************************ 00:04:57.040 START TEST odd_alloc 00:04:57.040 ************************************ 00:04:57.040 00:50:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:57.040 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:57.040 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:57.040 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:57.040 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.041 00:50:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.976 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:57.976 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:57.976 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:57.976 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:57.976 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:57.976 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:57.976 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
00:04:57.976 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:57.976 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:57.976 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:57.976 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:57.976 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:57.976 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:57.976 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:57.976 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:57.976 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:57.976 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43765916 kB' 'MemAvailable: 47275224 kB' 'Buffers: 2704 kB' 'Cached: 12282580 kB' 'SwapCached: 0 kB' 'Active: 9305636 kB' 'Inactive: 3506552 kB' 'Active(anon): 8911284 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530088 kB' 'Mapped: 182424 kB' 'Shmem: 8384380 kB' 'KReclaimable: 204056 kB' 'Slab: 580536 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376480 kB' 'KernelStack: 12848 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 
'Committed_AS: 10030064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196760 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.242 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.243 
00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.243 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43762912 kB' 'MemAvailable: 47272220 kB' 'Buffers: 2704 kB' 'Cached: 12282584 kB' 'SwapCached: 0 kB' 'Active: 9301120 kB' 'Inactive: 3506552 kB' 'Active(anon): 8906768 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525584 kB' 'Mapped: 182424 kB' 'Shmem: 8384384 kB' 'KReclaimable: 204056 kB' 'Slab: 580504 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376448 kB' 'KernelStack: 12848 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10025976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.244 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43756584 kB' 'MemAvailable: 47265892 kB' 'Buffers: 2704 kB' 'Cached: 12282600 kB' 'SwapCached: 0 kB' 'Active: 9306184 kB' 'Inactive: 3506552 kB' 'Active(anon): 8911832 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530724 kB' 'Mapped: 182432 kB' 'Shmem: 8384400 kB' 'KReclaimable: 204056 kB' 'Slab: 580500 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376444 kB' 'KernelStack: 12928 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10031100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196760 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.245 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.246 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:58.247 nr_hugepages=1025 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.247 resv_hugepages=0 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.247 surplus_hugepages=0 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.247 anon_hugepages=0 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 
-- # local var val 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43756584 kB' 'MemAvailable: 47265892 kB' 'Buffers: 2704 kB' 'Cached: 12282600 kB' 'SwapCached: 0 kB' 'Active: 9306420 kB' 'Inactive: 3506552 kB' 'Active(anon): 8912068 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530512 kB' 'Mapped: 182848 kB' 'Shmem: 8384400 kB' 'KReclaimable: 204056 kB' 'Slab: 580500 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376444 kB' 'KernelStack: 13248 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10032492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196888 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.247 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 
00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.248 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26955356 kB' 'MemUsed: 5874528 kB' 'SwapCached: 0 kB' 'Active: 3546872 kB' 'Inactive: 109764 kB' 'Active(anon): 3435984 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3403976 kB' 'Mapped: 49708 kB' 'AnonPages: 255844 kB' 'Shmem: 3183324 kB' 'KernelStack: 6888 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93796 kB' 'Slab: 315664 kB' 'SReclaimable: 93796 kB' 'SUnreclaim: 221868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.249 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
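The long run of "continue" entries here is setup/common.sh's get_meminfo walking every key of a meminfo file until it reaches the one it was asked for (HugePages_Surp in this pass). A condensed, runnable sketch of that flow, with names taken from the trace; it illustrates the pattern rather than reproducing the exact setup/common.sh source, and it substitutes a sed prefix strip for the script's extglob expansion:

get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo instead of the global file.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    # The per-node file prefixes every line with "Node <N> "; drop it before parsing.
    mapfile -t mem < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }  # matched key: print its value
    done                                  # every missed key is one "continue" in the xtrace
    echo 0
}

For example, get_meminfo_sketch HugePages_Surp 1 prints 0 against the node-1 snapshot dumped just below.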
00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16801036 kB' 'MemUsed: 10910788 kB' 'SwapCached: 0 kB' 'Active: 5754364 kB' 'Inactive: 3396788 kB' 'Active(anon): 5470900 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8881368 kB' 'Mapped: 132320 kB' 'AnonPages: 270252 kB' 'Shmem: 5201116 kB' 'KernelStack: 6248 kB' 'PageTables: 4744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110260 kB' 'Slab: 264828 kB' 'SReclaimable: 110260 kB' 'SUnreclaim: 154568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
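After that node-1 snapshot, the same matching loop runs once more until it hits HugePages_Surp, echoes 0, and hugepages.sh folds the value into its per-node count (hugepages.sh@115-117 in the trace). Roughly, with the numbers visible in this run; the 512/513 pair and the zero reserve/surplus values are taken from the log and the variable names from the trace, so treat this as illustrative accounting rather than the script itself:

nodes_test=([0]=512 [1]=513)   # odd_alloc spreads 1025 pages as 512+513 over the two nodes
resv=0                         # no reserved pages reported in this run
for node in "${!nodes_test[@]}"; do
    surp=0                     # get_meminfo HugePages_Surp <node> returned 0 for both nodes
    (( nodes_test[node] += resv + surp ))
done
echo "${nodes_test[0]} ${nodes_test[1]}"   # 512 513, the pair checked at the end of the test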
00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.250 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.251 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:58.252 node0=512 expecting 513 00:04:58.252 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.252 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.252 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.252 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:58.252 node1=513 expecting 512 00:04:58.252 00:50:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:58.252 00:04:58.252 real 0m1.363s 00:04:58.252 user 0m0.563s 00:04:58.252 sys 0m0.753s 00:04:58.252 00:50:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.252 00:50:47 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:58.252 ************************************ 00:04:58.252 END TEST odd_alloc 00:04:58.252 ************************************ 00:04:58.252 00:50:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:58.252 00:50:47 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:58.252 00:50:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.252 00:50:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.252 00:50:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:58.252 ************************************ 00:04:58.252 START TEST custom_alloc 00:04:58.252 ************************************ 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:58.252 00:50:47 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.252 00:50:47 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.635 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.635 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.635 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.635 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.635 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.635 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.635 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.635 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.635 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.635 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.635 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.635 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:04:59.635 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.635 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.635 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.635 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.635 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42723368 kB' 'MemAvailable: 46232676 kB' 'Buffers: 2704 kB' 'Cached: 12282708 kB' 'SwapCached: 0 kB' 'Active: 9299916 kB' 'Inactive: 3506552 kB' 'Active(anon): 8905564 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524252 kB' 'Mapped: 181676 kB' 'Shmem: 8384508 kB' 'KReclaimable: 204056 kB' 'Slab: 580456 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376400 kB' 'KernelStack: 12864 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10024200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196756 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
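The /proc/meminfo snapshot above already shows the state this verify pass is checking: 1536 hugepages of 2048 kB each. The scan that follows only needs AnonHugePages out of it (0 kB here, so anon=0) and, a little later, HugePages_Surp. The page counts requested earlier (nodes_hp[0]=512, nodes_hp[1]=1024) follow directly from the sizes handed to get_test_nr_hugepages; reconstructed from the values in this log:

hugepagesize=2048                        # kB, the 'Hugepagesize:' line above
echo $(( 1048576 / hugepagesize ))       # 512  -> nodes_hp[0] (get_test_nr_hugepages 1048576)
echo $(( 2097152 / hugepagesize ))       # 1024 -> nodes_hp[1] (get_test_nr_hugepages 2097152)
echo $(( (512 + 1024) * hugepagesize ))  # 3145728 kB, the 'Hugetlb:' line for 1536 total pages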
00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.635 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42723692 kB' 'MemAvailable: 46233000 kB' 'Buffers: 2704 kB' 'Cached: 12282708 kB' 'SwapCached: 0 kB' 'Active: 9299948 kB' 'Inactive: 3506552 kB' 'Active(anon): 8905596 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524264 kB' 'Mapped: 181620 kB' 'Shmem: 8384508 kB' 'KReclaimable: 204056 kB' 'Slab: 580456 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376400 kB' 'KernelStack: 12880 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10024216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.636 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42723960 kB' 'MemAvailable: 46233268 kB' 'Buffers: 2704 kB' 'Cached: 12282728 kB' 'SwapCached: 0 kB' 'Active: 9299896 kB' 'Inactive: 3506552 kB' 'Active(anon): 8905544 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524192 kB' 
'Mapped: 181620 kB' 'Shmem: 8384528 kB' 'KReclaimable: 204056 kB' 'Slab: 580480 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376424 kB' 'KernelStack: 12864 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10024240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.637 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:59.638 nr_hugepages=1536 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.638 resv_hugepages=0 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.638 surplus_hugepages=0 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.638 anon_hugepages=0 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.638 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42724216 kB' 'MemAvailable: 46233524 kB' 'Buffers: 2704 kB' 'Cached: 12282744 kB' 'SwapCached: 0 kB' 'Active: 9299920 kB' 'Inactive: 3506552 kB' 'Active(anon): 8905568 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524236 kB' 'Mapped: 181620 kB' 'Shmem: 8384544 kB' 'KReclaimable: 204056 kB' 'Slab: 580468 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376412 kB' 'KernelStack: 12864 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10024260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.639 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
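The get_nodes step that just completed filled one slot per NUMA node (512 pages on node0, 1024 on node1) before the per-node surplus checks below. A sketch of the same accounting, reusing the lookup sketch above; the sysfs node directories are the standard kernel paths, everything else is illustrative:

  # Sketch of the per-node hugepage accounting just traced: enumerate NUMA nodes,
  # read each node's HugePages_Total, and check the sum (512 + 1024 = 1536 here).
  declare -A node_pages
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      node_pages[$node]=$(get_meminfo_sketch HugePages_Total "$node")
  done
  total=0
  for n in "${!node_pages[@]}"; do
      (( total += node_pages[n] ))
  done
  echo "per-node: ${node_pages[*]}  total: $total"   # expected total: 1536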
00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26959352 kB' 'MemUsed: 5870532 kB' 'SwapCached: 0 kB' 'Active: 3546872 kB' 'Inactive: 109764 kB' 'Active(anon): 3435984 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3404072 kB' 'Mapped: 49460 kB' 'AnonPages: 255776 kB' 'Shmem: 3183420 kB' 'KernelStack: 6904 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93796 kB' 'Slab: 315656 kB' 'SReclaimable: 93796 kB' 'SUnreclaim: 221860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.640 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.640 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15764864 kB' 'MemUsed: 11946960 kB' 'SwapCached: 0 kB' 'Active: 5753212 kB' 'Inactive: 3396788 kB' 'Active(anon): 5469748 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8881380 kB' 'Mapped: 132160 kB' 'AnonPages: 268620 kB' 'Shmem: 5201128 kB' 'KernelStack: 5960 kB' 'PageTables: 3156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110260 kB' 'Slab: 264812 kB' 'SReclaimable: 110260 kB' 'SUnreclaim: 154552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:59.641 node0=512 expecting 512 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:59.641 node1=1024 expecting 1024 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:59.641 00:04:59.641 real 0m1.337s 00:04:59.641 user 0m0.574s 00:04:59.641 sys 0m0.720s 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.641 00:50:48 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 ************************************ 00:04:59.641 END TEST custom_alloc 00:04:59.641 ************************************ 00:04:59.641 00:50:48 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:59.641 00:50:48 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:59.641 00:50:48 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.641 00:50:48 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.641 00:50:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 ************************************ 00:04:59.641 START TEST no_shrink_alloc 00:04:59.641 ************************************ 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.641 00:50:48 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:01.036 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:01.036 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:01.036 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:01.036 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:01.036 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:01.036 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:01.036 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:01.036 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:01.036 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:01.036 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:01.036 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:01.036 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:01.036 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:01.036 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:01.036 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:01.036 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:01.036 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.036 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43768364 kB' 'MemAvailable: 47277672 kB' 'Buffers: 2704 kB' 'Cached: 12282832 kB' 'SwapCached: 0 kB' 'Active: 9300484 kB' 'Inactive: 3506552 kB' 'Active(anon): 8906132 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525168 kB' 'Mapped: 181768 kB' 'Shmem: 8384632 kB' 'KReclaimable: 204056 kB' 'Slab: 580596 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376540 kB' 'KernelStack: 12880 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
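The verify pass for no_shrink_alloc starting here first checks whether transparent hugepages are disabled ("always [madvise] never" above) and then looks up AnonHugePages. A sketch of that check against the standard kernel interfaces, again reusing the lookup sketch and intended only as illustration:

  # Sketch of the anonymous-hugepage check this verify pass begins with:
  # if THP is not set to "never", count AnonHugePages from /proc/meminfo.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)              # value in kB
  fi
  echo "THP: $thp  AnonHugePages: ${anon} kB"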
00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.036 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.037 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43768720 kB' 'MemAvailable: 47278028 kB' 'Buffers: 2704 kB' 'Cached: 12282840 kB' 'SwapCached: 0 kB' 'Active: 9300800 kB' 'Inactive: 3506552 kB' 'Active(anon): 8906448 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525416 kB' 'Mapped: 181712 kB' 'Shmem: 8384640 kB' 'KReclaimable: 204056 kB' 'Slab: 580580 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376524 kB' 'KernelStack: 12880 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024684 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 
00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.038 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43768976 kB' 'MemAvailable: 47278284 kB' 'Buffers: 2704 kB' 'Cached: 12282860 kB' 'SwapCached: 0 kB' 'Active: 9300440 kB' 'Inactive: 3506552 kB' 'Active(anon): 8906088 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525016 kB' 'Mapped: 181636 kB' 'Shmem: 8384660 kB' 'KReclaimable: 204056 kB' 'Slab: 580576 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376520 kB' 'KernelStack: 12848 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.039 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 
00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.040 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:01.041 nr_hugepages=1024 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:01.041 resv_hugepages=0 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:01.041 surplus_hugepages=0 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:01.041 anon_hugepages=0 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43768976 kB' 'MemAvailable: 47278284 kB' 'Buffers: 2704 kB' 'Cached: 12282880 kB' 'SwapCached: 0 kB' 'Active: 9300700 kB' 'Inactive: 3506552 kB' 'Active(anon): 8906348 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525312 kB' 'Mapped: 181636 kB' 'Shmem: 8384680 kB' 'KReclaimable: 204056 kB' 'Slab: 580576 kB' 'SReclaimable: 204056 kB' 'SUnreclaim: 376520 kB' 'KernelStack: 12896 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.041 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.042 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
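For readers following the trace, the long field-by-field scan above is the get_meminfo helper from setup/common.sh walking /proc/meminfo (or a per-node meminfo file) until it reaches the requested counter. A minimal sketch of that pattern, assuming extglob and reusing the variable names visible in the trace (the real helper may differ in detail):

shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line mem_f
    local -a mem

    mem_f=/proc/meminfo
    # With a node id, the per-node meminfo file is used instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <id> "; strip it so the
    # field names match the plain /proc/meminfo layout.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"   # numeric value only, e.g. 1024
        return 0
    done
    return 1
}

With the counters shown in this run, get_meminfo HugePages_Total yields 1024 and get_meminfo HugePages_Surp 0 yields 0, which is exactly what the echo/return pairs in the trace below report.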
00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25916252 kB' 'MemUsed: 6913632 kB' 'SwapCached: 0 kB' 'Active: 3547500 kB' 'Inactive: 109764 kB' 'Active(anon): 3436612 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3404224 kB' 'Mapped: 49476 kB' 'AnonPages: 256364 kB' 'Shmem: 3183572 kB' 'KernelStack: 6904 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93796 kB' 'Slab: 315648 kB' 'SReclaimable: 93796 kB' 'SUnreclaim: 221852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.043 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
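The pass above is the same scan run against node0's meminfo (note mem_f switching to /sys/devices/system/node/node0/meminfo) to pick up HugePages_Surp for the per-node accounting in hugepages.sh. As a hedged illustration of where those per-node numbers come from, the pool sizes can also be read straight from sysfs (2048 kB pages, as the Hugepagesize field above indicates; paths and loop are illustrative, not the script's own code):

shopt -s extglob
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}
    pool=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    surp=$(get_meminfo HugePages_Surp "$n")   # helper sketched earlier
    echo "node$n: $pool hugepages, $surp surplus"
done

On this host that corresponds to node0 holding all 1024 pages with no surplus and node1 holding none, matching the "node0=1024 expecting 1024" summary printed a little further down.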
00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.044 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:01.045 node0=1024 expecting 1024 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:01.045 00:50:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.427 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:02.427 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:05:02.427 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:02.427 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:02.427 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:02.427 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:02.427 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:02.427 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:02.427 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:02.427 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:05:02.427 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:05:02.427 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:05:02.427 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:05:02.427 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:05:02.427 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:05:02.427 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:05:02.427 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:05:02.427 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.427 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43764160 kB' 'MemAvailable: 47273452 kB' 'Buffers: 2704 kB' 'Cached: 12282948 kB' 'SwapCached: 0 kB' 'Active: 9301568 kB' 'Inactive: 3506552 kB' 'Active(anon): 8907216 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525628 kB' 'Mapped: 181700 kB' 'Shmem: 8384748 kB' 'KReclaimable: 204024 kB' 'Slab: 580548 kB' 'SReclaimable: 204024 kB' 'SUnreclaim: 376524 kB' 'KernelStack: 12896 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
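This second verify_nr_hugepages pass (after the NRHUGE=512 setup run above that found 1024 pages already allocated on node0) starts by checking the transparent-hugepage mode string "always [madvise] never" and, since THP is not set to [never], samples AnonHugePages. A short sketch of that probe, assuming the standard sysfs location rather than the script's literal code:

thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory
fi
echo "anon_hugepages=$anon"

The scan continuing below ends the same way: AnonHugePages resolves to 0 kB, so anon stays 0.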
00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.428 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43765056 kB' 'MemAvailable: 47274348 kB' 'Buffers: 2704 kB' 'Cached: 12282948 kB' 'SwapCached: 0 kB' 'Active: 9301596 kB' 'Inactive: 3506552 kB' 'Active(anon): 8907244 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525664 kB' 'Mapped: 181720 kB' 'Shmem: 8384748 kB' 'KReclaimable: 204024 kB' 'Slab: 580620 kB' 'SReclaimable: 204024 kB' 'SUnreclaim: 376596 kB' 'KernelStack: 12928 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.429 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 
00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.430 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43765620 kB' 'MemAvailable: 47274912 kB' 'Buffers: 2704 kB' 'Cached: 12282968 kB' 'SwapCached: 0 kB' 'Active: 9301388 kB' 'Inactive: 3506552 kB' 'Active(anon): 8907036 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525464 kB' 'Mapped: 181644 kB' 'Shmem: 8384768 kB' 'KReclaimable: 204024 kB' 'Slab: 580588 kB' 'SReclaimable: 204024 kB' 'SUnreclaim: 376564 kB' 'KernelStack: 12944 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.431 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
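The trace above and below is the setup/common.sh meminfo helper scanning /proc/meminfo key by key (set IFS=': ', read -r var val _, continue until the requested field matches, then echo its value), first for HugePages_Surp and then for HugePages_Rsvd, before the hugepage accounting is checked. The following is a minimal stand-alone sketch of that pattern under stated assumptions: the function name get_meminfo_value and the final consistency check are illustrative only, not the actual SPDK test code.

#!/usr/bin/env bash
# Sketch of a /proc/meminfo lookup in the spirit of the traced helper.
# Hypothetical name and structure; the real setup/common.sh differs in detail.
get_meminfo_value() {
    local key=$1 node=${2:-} file=/proc/meminfo line
    # With a NUMA node id, read that node's meminfo instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix each entry with "Node <id> "; strip it so both
        # layouts parse identically.
        [[ $line =~ ^Node[[:space:]][0-9]+[[:space:]](.*)$ ]] && line=${BASH_REMATCH[1]}
        # Match "<key>:   <value> [kB]" and print only the numeric value.
        if [[ $line =~ ^"$key":[[:space:]]+([0-9]+) ]]; then
            echo "${BASH_REMATCH[1]}"
            return 0
        fi
    done <"$file"
    return 1
}

# Accounting of the kind performed right after these lookups (an assumption
# about intent, mirroring the nr_hugepages=1024 value seen in this log): the
# hugepage pool reported by the kernel should equal the requested page count
# plus any surplus and reserved pages.
nr_hugepages=1024
total=$(get_meminfo_value HugePages_Total)
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"

On the system captured in this log the three lookups would return 1024, 0 and 0, which is why the nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 echoes further down all line up.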
00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.432 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:02.433 nr_hugepages=1024 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:02.433 resv_hugepages=0 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:02.433 surplus_hugepages=0 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:02.433 anon_hugepages=0 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43765224 kB' 'MemAvailable: 47274516 kB' 'Buffers: 2704 kB' 'Cached: 12282992 kB' 'SwapCached: 0 kB' 'Active: 9301400 kB' 'Inactive: 3506552 kB' 'Active(anon): 8907048 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525428 kB' 'Mapped: 181644 kB' 'Shmem: 8384792 kB' 'KReclaimable: 204024 kB' 'Slab: 580588 kB' 'SReclaimable: 204024 kB' 'SUnreclaim: 376564 kB' 'KernelStack: 12928 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10024968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37056 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1912412 kB' 'DirectMap2M: 15833088 kB' 'DirectMap1G: 51380224 kB' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 
00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.433 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:02.434 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25899812 kB' 'MemUsed: 6930072 kB' 'SwapCached: 0 kB' 'Active: 3547792 kB' 'Inactive: 109764 kB' 'Active(anon): 3436904 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 109764 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3404332 kB' 'Mapped: 49484 kB' 'AnonPages: 256432 kB' 'Shmem: 3183680 kB' 'KernelStack: 6920 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93796 kB' 'Slab: 315728 kB' 'SReclaimable: 93796 kB' 'SUnreclaim: 221932 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 
00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 
00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.435 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:02.436 node0=1024 expecting 1024 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:02.436 00:05:02.436 real 0m2.753s 00:05:02.436 user 0m1.169s 00:05:02.436 sys 0m1.502s 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.436 00:50:51 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:02.436 ************************************ 00:05:02.436 END TEST no_shrink_alloc 00:05:02.436 ************************************ 00:05:02.436 00:50:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
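The long run of trace above is setup/common.sh's get_meminfo helper scanning a meminfo file key by key (the repeated '[[ <key> == HugePages_Total ]] / continue' lines) until it reaches the requested field, first system-wide and then per NUMA node via /sys/devices/system/node/node0/meminfo; hugepages.sh@110 then checks that the reported 1024 pages match nr_hugepages plus surplus and reserved, and node0 ends up with all 1024. A minimal standalone sketch of that lookup pattern follows; the function name, argument order and defaults here are illustrative, not the SPDK helper's actual interface.

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch <field> [numa-node]   (illustrative helper, not setup/common.sh)
    local key=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # Per-node accounting reads the node-local file when one exists, the same
    # switch the trace above makes to /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Node-local lines carry a "Node <id> " prefix; strip it so the key
    # comparison works for both files, then scan key/value pairs until a match.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -e 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example: system-wide hugepage count and the surplus pages on node 0,
# mirroring the HugePages_Total / HugePages_Surp lookups traced above.
total=$(get_meminfo_sketch HugePages_Total)
surp0=$(get_meminfo_sketch HugePages_Surp 0 || echo 0)
echo "HugePages_Total=${total:-0} node0_surplus=${surp0}"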
00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:02.436 00:50:51 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:02.436 00:05:02.436 real 0m11.080s 00:05:02.436 user 0m4.287s 00:05:02.436 sys 0m5.683s 00:05:02.436 00:50:51 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.436 00:50:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:02.436 ************************************ 00:05:02.436 END TEST hugepages 00:05:02.436 ************************************ 00:05:02.436 00:50:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:02.436 00:50:51 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:02.436 00:50:51 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.436 00:50:51 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.436 00:50:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:02.436 ************************************ 00:05:02.436 START TEST driver 00:05:02.436 ************************************ 00:05:02.436 00:50:51 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:02.695 * Looking for test storage... 
00:05:02.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:02.695 00:50:51 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:02.695 00:50:51 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:02.695 00:50:51 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.229 00:50:54 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:05.229 00:50:54 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.229 00:50:54 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.229 00:50:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:05.229 ************************************ 00:05:05.229 START TEST guess_driver 00:05:05.229 ************************************ 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:05.229 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:05.229 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:05.229 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:05.229 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:05.229 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:05.229 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:05.229 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:05.229 00:50:54 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:05.229 Looking for driver=vfio-pci 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.229 00:50:54 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.608 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.608 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.608 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.609 00:50:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:07.548 00:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:07.548 00:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:07.548 00:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:07.548 00:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:07.548 00:50:56 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:07.548 00:50:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.548 00:50:56 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.119 00:05:10.119 real 0m4.885s 00:05:10.119 user 0m1.110s 00:05:10.119 sys 0m1.892s 00:05:10.119 00:50:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.119 00:50:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:10.119 ************************************ 00:05:10.119 END TEST guess_driver 00:05:10.119 ************************************ 00:05:10.119 00:50:59 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:10.119 00:05:10.119 real 0m7.513s 00:05:10.119 user 0m1.672s 00:05:10.119 sys 0m2.971s 00:05:10.119 00:50:59 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.119 00:50:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:10.119 ************************************ 00:05:10.119 END TEST driver 00:05:10.119 ************************************ 00:05:10.119 00:50:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:10.119 00:50:59 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:10.119 00:50:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.119 00:50:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.119 00:50:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:10.119 ************************************ 00:05:10.119 START TEST devices 00:05:10.119 ************************************ 00:05:10.119 00:50:59 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:10.119 * Looking for test storage... 00:05:10.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:10.119 00:50:59 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:10.119 00:50:59 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:10.119 00:50:59 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.119 00:50:59 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:11.496 00:51:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:11.496 00:51:00 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:11.496 00:51:00 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:11.496 00:51:00 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:11.496 00:51:00 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:11.496 00:51:00 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:11.496 00:51:00 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.496 00:51:00 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:11.496 00:51:00 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:11.496 00:51:00 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:11.496 
00:51:00 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:11.496 No valid GPT data, bailing 00:05:11.496 00:51:00 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.753 00:51:00 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:11.753 00:51:00 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:11.753 00:51:00 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:11.753 00:51:00 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:11.753 00:51:00 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:11.753 00:51:00 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:11.753 00:51:00 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:11.753 00:51:00 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.753 00:51:00 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:11.753 00:51:00 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:11.753 00:51:00 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:11.753 00:51:00 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:11.753 00:51:00 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.753 00:51:00 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.753 00:51:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:11.753 ************************************ 00:05:11.753 START TEST nvme_mount 00:05:11.753 ************************************ 00:05:11.753 00:51:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:11.753 00:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:11.754 00:51:00 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:12.691 Creating new GPT entries in memory. 00:05:12.691 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.691 other utilities. 00:05:12.691 00:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.692 00:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.692 00:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.692 00:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.692 00:51:01 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:13.631 Creating new GPT entries in memory. 00:05:13.631 The operation has completed successfully. 00:05:13.631 00:51:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:13.631 00:51:02 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.631 00:51:02 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 999779 00:05:13.631 00:51:02 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.631 00:51:02 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:13.631 00:51:02 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.631 00:51:02 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:13.631 00:51:02 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:13.631 00:51:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.890 00:51:03 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.890 00:51:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.828 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.829 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.089 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.089 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.089 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.350 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:15.350 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:15.350 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.350 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.350 00:51:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:16.290 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.549 00:51:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.925 00:51:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.925 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.925 00:05:17.925 real 0m6.205s 00:05:17.925 user 0m1.428s 00:05:17.925 sys 0m2.349s 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.925 00:51:07 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:17.925 ************************************ 00:05:17.925 END TEST nvme_mount 00:05:17.925 ************************************ 00:05:17.925 00:51:07 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:17.925 00:51:07 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.925 00:51:07 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.925 00:51:07 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.925 00:51:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.925 ************************************ 00:05:17.925 START TEST dm_mount 00:05:17.925 ************************************ 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:17.925 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:17.926 00:51:07 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:18.859 Creating new GPT entries in memory. 00:05:18.859 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.859 other utilities. 00:05:18.859 00:51:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.859 00:51:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.859 00:51:08 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:18.859 00:51:08 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.859 00:51:08 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:20.233 Creating new GPT entries in memory. 00:05:20.233 The operation has completed successfully. 00:05:20.233 00:51:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:20.233 00:51:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.233 00:51:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.233 00:51:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.233 00:51:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:21.170 The operation has completed successfully. 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1002440 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.171 00:51:10 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.105 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:22.364 00:51:11 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.364 00:51:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:23.299 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:23.557 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:23.557 00:05:23.557 real 0m5.742s 00:05:23.557 user 0m0.955s 00:05:23.557 sys 0m1.639s 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.557 00:51:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:23.557 ************************************ 00:05:23.557 END TEST dm_mount 00:05:23.557 ************************************ 00:05:23.557 00:51:12 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:23.557 00:51:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:23.557 00:51:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:23.557 00:51:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.557 00:51:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.557 00:51:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.557 00:51:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.557 00:51:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.816 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:23.816 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:23.816 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.816 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.816 00:51:13 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:23.816 00:51:13 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:24.074 00:51:13 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:24.074 00:51:13 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:24.074 00:51:13 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:24.074 00:51:13 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:24.074 00:51:13 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:24.074 00:05:24.074 real 0m13.848s 00:05:24.074 user 0m3.007s 00:05:24.074 sys 0m5.029s 00:05:24.074 00:51:13 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.074 00:51:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:24.074 ************************************ 00:05:24.074 END TEST devices 00:05:24.074 ************************************ 00:05:24.074 00:51:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:24.074 00:05:24.074 real 0m43.470s 00:05:24.074 user 0m12.455s 00:05:24.074 sys 0m19.199s 00:05:24.074 00:51:13 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.074 00:51:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:24.074 ************************************ 00:05:24.074 END TEST setup.sh 00:05:24.074 ************************************ 00:05:24.074 00:51:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:24.074 00:51:13 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:25.022 Hugepages 00:05:25.022 node hugesize free / total 00:05:25.022 node0 1048576kB 0 / 0 00:05:25.022 node0 2048kB 2048 / 2048 00:05:25.022 node1 1048576kB 0 / 0 00:05:25.022 node1 2048kB 0 / 0 00:05:25.022 00:05:25.022 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:25.022 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:25.022 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:25.022 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:25.022 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:25.022 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:25.022 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:25.022 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:25.022 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:25.022 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:25.022 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:25.022 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:25.022 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:25.022 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:25.022 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:25.282 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:25.282 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:25.282 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:25.282 00:51:14 -- spdk/autotest.sh@130 -- # uname -s 00:05:25.282 00:51:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:25.282 00:51:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:25.282 00:51:14 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:26.660 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.660 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.660 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.660 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.660 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.660 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.660 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.660 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:26.660 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.660 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.660 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.660 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.660 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.660 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.660 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.660 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:27.631 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:27.631 00:51:16 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:28.568 00:51:17 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:28.568 00:51:17 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:28.568 00:51:17 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:28.568 00:51:17 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:28.568 00:51:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:28.568 00:51:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:28.568 00:51:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.568 00:51:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:28.568 00:51:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:28.568 00:51:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:28.568 00:51:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:28.568 00:51:17 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.949 Waiting for block devices as requested 00:05:29.949 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:29.949 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:29.949 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:29.949 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:30.208 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:30.208 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:30.208 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:30.208 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:30.467 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:30.467 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:30.467 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:30.467 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:30.726 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:30.726 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:30.726 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:30.984 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:30.984 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:30.984 00:51:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:30.984 00:51:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:30.984 00:51:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:30.984 00:51:20 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:30.984 00:51:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:30.984 00:51:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:30.984 00:51:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:30.984 00:51:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:30.984 00:51:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:30.984 00:51:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:30.984 00:51:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:30.984 00:51:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:30.984 00:51:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:30.984 00:51:20 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:30.984 00:51:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:30.984 00:51:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:30.984 00:51:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:30.984 00:51:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:30.984 00:51:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:30.984 00:51:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:30.984 00:51:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:30.984 00:51:20 -- common/autotest_common.sh@1557 -- # continue 00:05:30.984 00:51:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:30.984 00:51:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.984 00:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:30.984 00:51:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:30.984 00:51:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.984 00:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:31.242 00:51:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:32.177 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:32.177 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:32.177 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:32.177 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:32.177 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:32.177 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:32.177 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:32.177 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:32.177 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:32.177 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:05:32.177 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:32.436 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:32.436 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:32.436 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:32.436 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:32.436 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:33.371 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:33.371 00:51:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:33.371 00:51:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.371 00:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.371 00:51:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:33.371 00:51:22 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:33.371 00:51:22 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:33.371 00:51:22 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:33.371 00:51:22 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:33.371 00:51:22 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:33.371 00:51:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:33.371 00:51:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:33.371 00:51:22 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.371 00:51:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:33.371 00:51:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:33.371 00:51:22 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:33.371 00:51:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:33.371 00:51:22 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:33.371 00:51:22 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:33.371 00:51:22 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:33.371 00:51:22 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:33.371 00:51:22 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:33.371 00:51:22 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:33.371 00:51:22 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:33.371 00:51:22 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1007900 00:05:33.371 00:51:22 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.371 00:51:22 -- common/autotest_common.sh@1598 -- # waitforlisten 1007900 00:05:33.371 00:51:22 -- common/autotest_common.sh@829 -- # '[' -z 1007900 ']' 00:05:33.371 00:51:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.371 00:51:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.371 00:51:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.371 00:51:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.371 00:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:33.371 [2024-07-14 00:51:22.777826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:05:33.371 [2024-07-14 00:51:22.777938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007900 ] 00:05:33.630 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.630 [2024-07-14 00:51:22.834325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.630 [2024-07-14 00:51:22.921614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.888 00:51:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.888 00:51:23 -- common/autotest_common.sh@862 -- # return 0 00:05:33.888 00:51:23 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:33.888 00:51:23 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:33.888 00:51:23 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:37.169 nvme0n1 00:05:37.169 00:51:26 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:37.169 [2024-07-14 00:51:26.527361] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:37.169 [2024-07-14 00:51:26.527406] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:37.169 request: 00:05:37.169 { 00:05:37.169 "nvme_ctrlr_name": "nvme0", 00:05:37.169 "password": "test", 00:05:37.169 "method": "bdev_nvme_opal_revert", 00:05:37.169 "req_id": 1 00:05:37.169 } 00:05:37.169 Got JSON-RPC error response 00:05:37.169 response: 00:05:37.169 { 00:05:37.169 "code": -32603, 00:05:37.169 "message": "Internal error" 00:05:37.169 } 00:05:37.169 00:51:26 -- common/autotest_common.sh@1604 -- # true 00:05:37.169 00:51:26 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:37.169 00:51:26 -- common/autotest_common.sh@1608 -- # killprocess 1007900 00:05:37.169 00:51:26 -- common/autotest_common.sh@948 -- # '[' -z 1007900 ']' 00:05:37.169 00:51:26 -- common/autotest_common.sh@952 -- # kill -0 1007900 00:05:37.169 00:51:26 -- common/autotest_common.sh@953 -- # uname 00:05:37.169 00:51:26 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.169 00:51:26 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1007900 00:05:37.169 00:51:26 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.169 00:51:26 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.169 00:51:26 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1007900' 00:05:37.169 killing process with pid 1007900 00:05:37.169 00:51:26 -- common/autotest_common.sh@967 -- # kill 1007900 00:05:37.169 00:51:26 -- common/autotest_common.sh@972 -- # wait 1007900 00:05:37.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.427 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.427 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152
00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:37.428 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:39.328 00:51:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:39.328 00:51:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:39.328 00:51:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:39.328 00:51:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:39.328 00:51:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:39.328 00:51:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.328 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:39.328 00:51:28 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:39.328 00:51:28 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:39.328 00:51:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.328 00:51:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.328 00:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:39.328 ************************************ 00:05:39.328 START TEST env 00:05:39.328 ************************************ 00:05:39.328 00:51:28 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:39.328 * Looking for test storage... 
00:05:39.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:39.328 00:51:28 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:39.328 00:51:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.328 00:51:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.328 00:51:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.328 ************************************ 00:05:39.328 START TEST env_memory 00:05:39.328 ************************************ 00:05:39.328 00:51:28 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:39.328 00:05:39.328 00:05:39.328 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.329 http://cunit.sourceforge.net/ 00:05:39.329 00:05:39.329 00:05:39.329 Suite: memory 00:05:39.329 Test: alloc and free memory map ...[2024-07-14 00:51:28.473811] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:39.329 passed 00:05:39.329 Test: mem map translation ...[2024-07-14 00:51:28.494373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:39.329 [2024-07-14 00:51:28.494396] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:39.329 [2024-07-14 00:51:28.494446] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:39.329 [2024-07-14 00:51:28.494458] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:39.329 passed 00:05:39.329 Test: mem map registration ...[2024-07-14 00:51:28.535319] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:39.329 [2024-07-14 00:51:28.535339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:39.329 passed 00:05:39.329 Test: mem map adjacent registrations ...passed 00:05:39.329 00:05:39.329 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.329 suites 1 1 n/a 0 0 00:05:39.329 tests 4 4 4 0 0 00:05:39.329 asserts 152 152 152 0 n/a 00:05:39.329 00:05:39.329 Elapsed time = 0.141 seconds 00:05:39.329 00:05:39.329 real 0m0.150s 00:05:39.329 user 0m0.144s 00:05:39.329 sys 0m0.006s 00:05:39.329 00:51:28 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.329 00:51:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:39.329 ************************************ 00:05:39.329 END TEST env_memory 00:05:39.329 ************************************ 00:05:39.329 00:51:28 env -- common/autotest_common.sh@1142 -- # return 0 00:05:39.329 00:51:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:39.329 00:51:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
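For context, the *ERROR* lines from memory.c in the env_memory run above are the expected-failure cases of the spdk_mem_map API: translations are tracked at 2 MB granularity, so the len=1234 and unaligned-vaddr calls are rejected as invalid parameters. A minimal C sketch of the same spdk/env.h calls follows; the addresses and translation value are illustrative and not taken from this run.

#include "spdk/stdinc.h"
#include "spdk/env.h"

/* Trivial notify callback: accept every REGISTER/UNREGISTER event. */
static int
sketch_notify(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
        return 0;
}

static const struct spdk_mem_map_ops sketch_ops = {
        .notify_cb = sketch_notify,
        .are_contiguous = NULL,
};

static void
mem_map_sketch(void)
{
        struct spdk_mem_map *map;
        uint64_t len = 0x200000;

        /* Default translation 0; illustrative addresses only. */
        map = spdk_mem_map_alloc(0, &sketch_ops, NULL);
        if (map == NULL) {
                return;
        }

        /* vaddr and size must be 2 MB aligned, unlike the len=1234 calls above. */
        spdk_mem_map_set_translation(map, 0x200000000000, 0x200000, 0x1000000);

        printf("translation: 0x%" PRIx64 "\n",
               spdk_mem_map_translate(map, 0x200000000000, &len));

        spdk_mem_map_clear_translation(map, 0x200000000000, 0x200000);
        spdk_mem_map_free(&map);
}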
00:05:39.329 00:51:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.329 00:51:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.329 ************************************ 00:05:39.329 START TEST env_vtophys 00:05:39.329 ************************************ 00:05:39.329 00:51:28 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:39.329 EAL: lib.eal log level changed from notice to debug 00:05:39.329 EAL: Detected lcore 0 as core 0 on socket 0 00:05:39.329 EAL: Detected lcore 1 as core 1 on socket 0 00:05:39.329 EAL: Detected lcore 2 as core 2 on socket 0 00:05:39.329 EAL: Detected lcore 3 as core 3 on socket 0 00:05:39.329 EAL: Detected lcore 4 as core 4 on socket 0 00:05:39.329 EAL: Detected lcore 5 as core 5 on socket 0 00:05:39.329 EAL: Detected lcore 6 as core 8 on socket 0 00:05:39.329 EAL: Detected lcore 7 as core 9 on socket 0 00:05:39.329 EAL: Detected lcore 8 as core 10 on socket 0 00:05:39.329 EAL: Detected lcore 9 as core 11 on socket 0 00:05:39.329 EAL: Detected lcore 10 as core 12 on socket 0 00:05:39.329 EAL: Detected lcore 11 as core 13 on socket 0 00:05:39.329 EAL: Detected lcore 12 as core 0 on socket 1 00:05:39.329 EAL: Detected lcore 13 as core 1 on socket 1 00:05:39.329 EAL: Detected lcore 14 as core 2 on socket 1 00:05:39.329 EAL: Detected lcore 15 as core 3 on socket 1 00:05:39.329 EAL: Detected lcore 16 as core 4 on socket 1 00:05:39.329 EAL: Detected lcore 17 as core 5 on socket 1 00:05:39.329 EAL: Detected lcore 18 as core 8 on socket 1 00:05:39.329 EAL: Detected lcore 19 as core 9 on socket 1 00:05:39.329 EAL: Detected lcore 20 as core 10 on socket 1 00:05:39.329 EAL: Detected lcore 21 as core 11 on socket 1 00:05:39.329 EAL: Detected lcore 22 as core 12 on socket 1 00:05:39.329 EAL: Detected lcore 23 as core 13 on socket 1 00:05:39.329 EAL: Detected lcore 24 as core 0 on socket 0 00:05:39.329 EAL: Detected lcore 25 as core 1 on socket 0 00:05:39.329 EAL: Detected lcore 26 as core 2 on socket 0 00:05:39.329 EAL: Detected lcore 27 as core 3 on socket 0 00:05:39.329 EAL: Detected lcore 28 as core 4 on socket 0 00:05:39.329 EAL: Detected lcore 29 as core 5 on socket 0 00:05:39.329 EAL: Detected lcore 30 as core 8 on socket 0 00:05:39.329 EAL: Detected lcore 31 as core 9 on socket 0 00:05:39.329 EAL: Detected lcore 32 as core 10 on socket 0 00:05:39.329 EAL: Detected lcore 33 as core 11 on socket 0 00:05:39.329 EAL: Detected lcore 34 as core 12 on socket 0 00:05:39.329 EAL: Detected lcore 35 as core 13 on socket 0 00:05:39.329 EAL: Detected lcore 36 as core 0 on socket 1 00:05:39.329 EAL: Detected lcore 37 as core 1 on socket 1 00:05:39.329 EAL: Detected lcore 38 as core 2 on socket 1 00:05:39.329 EAL: Detected lcore 39 as core 3 on socket 1 00:05:39.329 EAL: Detected lcore 40 as core 4 on socket 1 00:05:39.329 EAL: Detected lcore 41 as core 5 on socket 1 00:05:39.329 EAL: Detected lcore 42 as core 8 on socket 1 00:05:39.329 EAL: Detected lcore 43 as core 9 on socket 1 00:05:39.329 EAL: Detected lcore 44 as core 10 on socket 1 00:05:39.329 EAL: Detected lcore 45 as core 11 on socket 1 00:05:39.329 EAL: Detected lcore 46 as core 12 on socket 1 00:05:39.329 EAL: Detected lcore 47 as core 13 on socket 1 00:05:39.329 EAL: Maximum logical cores by configuration: 128 00:05:39.329 EAL: Detected CPU lcores: 48 00:05:39.329 EAL: Detected NUMA nodes: 2 00:05:39.329 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:39.329 EAL: Detected shared linkage of DPDK 
00:05:39.329 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:39.329 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:39.329 EAL: Registered [vdev] bus. 00:05:39.329 EAL: bus.vdev log level changed from disabled to notice 00:05:39.329 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:39.329 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:39.329 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:39.329 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:39.329 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:39.329 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:39.329 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:39.329 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:39.329 EAL: No shared files mode enabled, IPC will be disabled 00:05:39.329 EAL: No shared files mode enabled, IPC is disabled 00:05:39.329 EAL: Bus pci wants IOVA as 'DC' 00:05:39.329 EAL: Bus vdev wants IOVA as 'DC' 00:05:39.329 EAL: Buses did not request a specific IOVA mode. 00:05:39.329 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:39.329 EAL: Selected IOVA mode 'VA' 00:05:39.329 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.329 EAL: Probing VFIO support... 00:05:39.329 EAL: IOMMU type 1 (Type 1) is supported 00:05:39.329 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:39.329 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:39.329 EAL: VFIO support initialized 00:05:39.329 EAL: Ask a virtual area of 0x2e000 bytes 00:05:39.329 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:39.329 EAL: Setting up physically contiguous memory... 
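The EAL lcore detection, shared-library, IOVA, and VFIO lines above are printed while DPDK is brought up on behalf of the vtophys test binary. A hedged sketch of how an SPDK program requests that initialization is shown below; the option values are illustrative, not the exact ones used by this test.

#include "spdk/stdinc.h"
#include "spdk/env.h"

int
main(void)
{
        struct spdk_env_opts opts;

        /* Populate defaults, then override only what this sketch needs. */
        spdk_env_opts_init(&opts);
        opts.name = "env_sketch";      /* illustrative process name */
        opts.core_mask = "0x1";        /* illustrative core mask */

        if (spdk_env_init(&opts) < 0) {
                fprintf(stderr, "spdk_env_init() failed\n");
                return 1;
        }

        /* ... run work on the initialized environment ... */
        return 0;
}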
00:05:39.329 EAL: Setting maximum number of open files to 524288 00:05:39.329 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:39.329 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:39.329 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:39.329 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.329 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:39.329 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.329 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.329 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:39.329 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:39.329 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.329 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:39.329 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.329 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.329 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:39.329 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:39.329 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.329 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:39.329 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.329 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.329 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:39.329 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:39.329 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.329 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:39.329 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.329 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.329 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:39.329 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:39.329 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:39.329 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.329 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:39.329 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.329 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.329 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:39.329 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:39.329 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.329 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:39.329 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.329 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.329 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:39.329 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:39.329 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.329 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:39.329 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.329 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.329 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:39.329 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:39.329 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.329 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:39.329 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.329 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.329 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:39.330 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:39.330 EAL: Hugepages will be freed exactly as allocated. 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: TSC frequency is ~2700000 KHz 00:05:39.330 EAL: Main lcore 0 is ready (tid=7f8405934a00;cpuset=[0]) 00:05:39.330 EAL: Trying to obtain current memory policy. 00:05:39.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.330 EAL: Restoring previous memory policy: 0 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was expanded by 2MB 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:39.330 EAL: Mem event callback 'spdk:(nil)' registered 00:05:39.330 00:05:39.330 00:05:39.330 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.330 http://cunit.sourceforge.net/ 00:05:39.330 00:05:39.330 00:05:39.330 Suite: components_suite 00:05:39.330 Test: vtophys_malloc_test ...passed 00:05:39.330 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:39.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.330 EAL: Restoring previous memory policy: 4 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was expanded by 4MB 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was shrunk by 4MB 00:05:39.330 EAL: Trying to obtain current memory policy. 00:05:39.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.330 EAL: Restoring previous memory policy: 4 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was expanded by 6MB 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was shrunk by 6MB 00:05:39.330 EAL: Trying to obtain current memory policy. 00:05:39.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.330 EAL: Restoring previous memory policy: 4 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was expanded by 10MB 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was shrunk by 10MB 00:05:39.330 EAL: Trying to obtain current memory policy. 
00:05:39.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.330 EAL: Restoring previous memory policy: 4 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was expanded by 18MB 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was shrunk by 18MB 00:05:39.330 EAL: Trying to obtain current memory policy. 00:05:39.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.330 EAL: Restoring previous memory policy: 4 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was expanded by 34MB 00:05:39.330 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.330 EAL: request: mp_malloc_sync 00:05:39.330 EAL: No shared files mode enabled, IPC is disabled 00:05:39.330 EAL: Heap on socket 0 was shrunk by 34MB 00:05:39.330 EAL: Trying to obtain current memory policy. 00:05:39.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.588 EAL: Restoring previous memory policy: 4 00:05:39.588 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.588 EAL: request: mp_malloc_sync 00:05:39.588 EAL: No shared files mode enabled, IPC is disabled 00:05:39.588 EAL: Heap on socket 0 was expanded by 66MB 00:05:39.588 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.588 EAL: request: mp_malloc_sync 00:05:39.588 EAL: No shared files mode enabled, IPC is disabled 00:05:39.588 EAL: Heap on socket 0 was shrunk by 66MB 00:05:39.588 EAL: Trying to obtain current memory policy. 00:05:39.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.588 EAL: Restoring previous memory policy: 4 00:05:39.588 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.588 EAL: request: mp_malloc_sync 00:05:39.588 EAL: No shared files mode enabled, IPC is disabled 00:05:39.588 EAL: Heap on socket 0 was expanded by 130MB 00:05:39.588 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.588 EAL: request: mp_malloc_sync 00:05:39.588 EAL: No shared files mode enabled, IPC is disabled 00:05:39.588 EAL: Heap on socket 0 was shrunk by 130MB 00:05:39.588 EAL: Trying to obtain current memory policy. 00:05:39.588 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.588 EAL: Restoring previous memory policy: 4 00:05:39.588 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.588 EAL: request: mp_malloc_sync 00:05:39.588 EAL: No shared files mode enabled, IPC is disabled 00:05:39.588 EAL: Heap on socket 0 was expanded by 258MB 00:05:39.588 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.847 EAL: request: mp_malloc_sync 00:05:39.847 EAL: No shared files mode enabled, IPC is disabled 00:05:39.847 EAL: Heap on socket 0 was shrunk by 258MB 00:05:39.847 EAL: Trying to obtain current memory policy. 
00:05:39.847 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.847 EAL: Restoring previous memory policy: 4 00:05:39.847 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.847 EAL: request: mp_malloc_sync 00:05:39.847 EAL: No shared files mode enabled, IPC is disabled 00:05:39.847 EAL: Heap on socket 0 was expanded by 514MB 00:05:40.105 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.105 EAL: request: mp_malloc_sync 00:05:40.105 EAL: No shared files mode enabled, IPC is disabled 00:05:40.105 EAL: Heap on socket 0 was shrunk by 514MB 00:05:40.105 EAL: Trying to obtain current memory policy. 00:05:40.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.364 EAL: Restoring previous memory policy: 4 00:05:40.364 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.364 EAL: request: mp_malloc_sync 00:05:40.364 EAL: No shared files mode enabled, IPC is disabled 00:05:40.364 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.623 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.882 EAL: request: mp_malloc_sync 00:05:40.882 EAL: No shared files mode enabled, IPC is disabled 00:05:40.882 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.882 passed 00:05:40.882 00:05:40.882 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.882 suites 1 1 n/a 0 0 00:05:40.882 tests 2 2 2 0 0 00:05:40.882 asserts 497 497 497 0 n/a 00:05:40.882 00:05:40.882 Elapsed time = 1.374 seconds 00:05:40.882 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.882 EAL: request: mp_malloc_sync 00:05:40.882 EAL: No shared files mode enabled, IPC is disabled 00:05:40.882 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.882 EAL: No shared files mode enabled, IPC is disabled 00:05:40.882 EAL: No shared files mode enabled, IPC is disabled 00:05:40.882 EAL: No shared files mode enabled, IPC is disabled 00:05:40.882 00:05:40.882 real 0m1.486s 00:05:40.882 user 0m0.859s 00:05:40.882 sys 0m0.595s 00:05:40.882 00:51:30 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.882 00:51:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:40.882 ************************************ 00:05:40.882 END TEST env_vtophys 00:05:40.882 ************************************ 00:05:40.882 00:51:30 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.882 00:51:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.882 00:51:30 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.882 00:51:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.882 00:51:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.882 ************************************ 00:05:40.882 START TEST env_pci 00:05:40.882 ************************************ 00:05:40.882 00:51:30 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.882 00:05:40.882 00:05:40.882 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.882 http://cunit.sourceforge.net/ 00:05:40.882 00:05:40.882 00:05:40.882 Suite: pci 00:05:40.882 Test: pci_hook ...[2024-07-14 00:51:30.172957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1008788 has claimed it 00:05:40.882 EAL: Cannot find device (10000:00:01.0) 00:05:40.882 EAL: Failed to attach device on primary process 00:05:40.882 passed 00:05:40.882 
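The "Heap on socket 0 was expanded/shrunk by ..." cycles above come from vtophys_spdk_malloc_test allocating progressively larger DMA buffers and verifying each has a physical translation. A rough C sketch of the underlying spdk/env.h calls follows; the buffer size and alignment are illustrative.

#include "spdk/stdinc.h"
#include "spdk/env.h"

static int
vtophys_sketch(void)
{
        void *buf;
        uint64_t paddr;

        /* Allocate a DMA-safe, zeroed buffer from hugepage-backed memory;
         * this is what drives the heap expand messages above. */
        buf = spdk_dma_zmalloc(4 * 1024 * 1024, 0x200000, NULL);
        if (buf == NULL) {
                return -ENOMEM;
        }

        /* Look up the physical address the way the vtophys test does. */
        paddr = spdk_vtophys(buf, NULL);
        if (paddr == SPDK_VTOPHYS_ERROR) {
                spdk_dma_free(buf);
                return -EFAULT;
        }

        printf("vaddr %p -> paddr 0x%" PRIx64 "\n", buf, paddr);
        spdk_dma_free(buf);
        return 0;
}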
00:05:40.882 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.882 suites 1 1 n/a 0 0 00:05:40.882 tests 1 1 1 0 0 00:05:40.882 asserts 25 25 25 0 n/a 00:05:40.882 00:05:40.882 Elapsed time = 0.021 seconds 00:05:40.882 00:05:40.882 real 0m0.032s 00:05:40.882 user 0m0.010s 00:05:40.882 sys 0m0.022s 00:05:40.882 00:51:30 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.882 00:51:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:40.882 ************************************ 00:05:40.882 END TEST env_pci 00:05:40.882 ************************************ 00:05:40.882 00:51:30 env -- common/autotest_common.sh@1142 -- # return 0 00:05:40.882 00:51:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.882 00:51:30 env -- env/env.sh@15 -- # uname 00:05:40.882 00:51:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.882 00:51:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.883 00:51:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.883 00:51:30 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:40.883 00:51:30 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.883 00:51:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:40.883 ************************************ 00:05:40.883 START TEST env_dpdk_post_init 00:05:40.883 ************************************ 00:05:40.883 00:51:30 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.883 EAL: Detected CPU lcores: 48 00:05:40.883 EAL: Detected NUMA nodes: 2 00:05:40.883 EAL: Detected shared linkage of DPDK 00:05:40.883 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.883 EAL: Selected IOVA mode 'VA' 00:05:40.883 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.883 EAL: VFIO support initialized 00:05:40.883 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.140 EAL: Using IOMMU type 1 (Type 1) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:05:41.140 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:42.073 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:45.411 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:45.411 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:45.411 Starting DPDK initialization... 00:05:45.411 Starting SPDK post initialization... 00:05:45.411 SPDK NVMe probe 00:05:45.411 Attaching to 0000:88:00.0 00:05:45.411 Attached to 0000:88:00.0 00:05:45.411 Cleaning up... 00:05:45.411 00:05:45.411 real 0m4.430s 00:05:45.411 user 0m3.303s 00:05:45.411 sys 0m0.186s 00:05:45.411 00:51:34 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.411 00:51:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:45.411 ************************************ 00:05:45.411 END TEST env_dpdk_post_init 00:05:45.411 ************************************ 00:05:45.411 00:51:34 env -- common/autotest_common.sh@1142 -- # return 0 00:05:45.411 00:51:34 env -- env/env.sh@26 -- # uname 00:05:45.411 00:51:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:45.411 00:51:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.411 00:51:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.411 00:51:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.411 00:51:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.411 ************************************ 00:05:45.411 START TEST env_mem_callbacks 00:05:45.411 ************************************ 00:05:45.411 00:51:34 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.411 EAL: Detected CPU lcores: 48 00:05:45.411 EAL: Detected NUMA nodes: 2 00:05:45.411 EAL: Detected shared linkage of DPDK 00:05:45.411 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.411 EAL: Selected IOVA mode 'VA' 00:05:45.411 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.411 EAL: VFIO support initialized 00:05:45.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:45.411 00:05:45.411 00:05:45.411 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.411 http://cunit.sourceforge.net/ 00:05:45.411 00:05:45.411 00:05:45.411 Suite: memory 00:05:45.411 Test: test ... 
00:05:45.411 register 0x200000200000 2097152 00:05:45.411 malloc 3145728 00:05:45.411 register 0x200000400000 4194304 00:05:45.411 buf 0x200000500000 len 3145728 PASSED 00:05:45.411 malloc 64 00:05:45.411 buf 0x2000004fff40 len 64 PASSED 00:05:45.411 malloc 4194304 00:05:45.411 register 0x200000800000 6291456 00:05:45.411 buf 0x200000a00000 len 4194304 PASSED 00:05:45.411 free 0x200000500000 3145728 00:05:45.411 free 0x2000004fff40 64 00:05:45.411 unregister 0x200000400000 4194304 PASSED 00:05:45.411 free 0x200000a00000 4194304 00:05:45.411 unregister 0x200000800000 6291456 PASSED 00:05:45.411 malloc 8388608 00:05:45.411 register 0x200000400000 10485760 00:05:45.411 buf 0x200000600000 len 8388608 PASSED 00:05:45.411 free 0x200000600000 8388608 00:05:45.411 unregister 0x200000400000 10485760 PASSED 00:05:45.411 passed 00:05:45.411 00:05:45.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.411 suites 1 1 n/a 0 0 00:05:45.411 tests 1 1 1 0 0 00:05:45.411 asserts 15 15 15 0 n/a 00:05:45.411 00:05:45.411 Elapsed time = 0.005 seconds 00:05:45.411 00:05:45.411 real 0m0.047s 00:05:45.411 user 0m0.012s 00:05:45.411 sys 0m0.035s 00:05:45.411 00:51:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.411 00:51:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:45.411 ************************************ 00:05:45.411 END TEST env_mem_callbacks 00:05:45.411 ************************************ 00:05:45.411 00:51:34 env -- common/autotest_common.sh@1142 -- # return 0 00:05:45.411 00:05:45.411 real 0m6.418s 00:05:45.411 user 0m4.446s 00:05:45.411 sys 0m1.015s 00:05:45.411 00:51:34 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.412 00:51:34 env -- common/autotest_common.sh@10 -- # set +x 00:05:45.412 ************************************ 00:05:45.412 END TEST env 00:05:45.412 ************************************ 00:05:45.692 00:51:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.692 00:51:34 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.692 00:51:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.692 00:51:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.692 00:51:34 -- common/autotest_common.sh@10 -- # set +x 00:05:45.692 ************************************ 00:05:45.692 START TEST rpc 00:05:45.692 ************************************ 00:05:45.692 00:51:34 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.692 * Looking for test storage... 00:05:45.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.692 00:51:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1009519 00:05:45.692 00:51:34 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:45.692 00:51:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.692 00:51:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1009519 00:05:45.692 00:51:34 rpc -- common/autotest_common.sh@829 -- # '[' -z 1009519 ']' 00:05:45.692 00:51:34 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.692 00:51:34 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.692 00:51:34 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
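The register/unregister lines in the mem_callbacks trace above are driven by spdk_mem_register() and spdk_mem_unregister() on 2 MB-aligned regions, which in turn fire the registered mem-event callback (the 'spdk:(nil)' callback seen earlier). A hedged C sketch follows; the region pointer and length are illustrative.

#include "spdk/stdinc.h"
#include "spdk/env.h"

static int
mem_register_sketch(void *region)
{
        int rc;

        /* Make a 2 MB-aligned region visible to SPDK's memory maps. */
        rc = spdk_mem_register(region, 2 * 1024 * 1024);
        if (rc != 0) {
                return rc;
        }

        /* While registered, the region is translatable via spdk_vtophys()
         * and visible to registered mem-event callbacks. */

        return spdk_mem_unregister(region, 2 * 1024 * 1024);
}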
00:05:45.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.692 00:51:34 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.692 00:51:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.692 [2024-07-14 00:51:34.928369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:45.692 [2024-07-14 00:51:34.928464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009519 ] 00:05:45.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.692 [2024-07-14 00:51:34.989851] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.692 [2024-07-14 00:51:35.081616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:45.692 [2024-07-14 00:51:35.081686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1009519' to capture a snapshot of events at runtime. 00:05:45.692 [2024-07-14 00:51:35.081699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.692 [2024-07-14 00:51:35.081711] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.692 [2024-07-14 00:51:35.081737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1009519 for offline analysis/debug. 00:05:45.692 [2024-07-14 00:51:35.081768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.950 00:51:35 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.950 00:51:35 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.950 00:51:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.950 00:51:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.950 00:51:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.950 00:51:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.950 00:51:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.950 00:51:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.950 00:51:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.208 ************************************ 00:05:46.208 START TEST rpc_integrity 00:05:46.208 ************************************ 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.208 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.208 { 00:05:46.208 "name": "Malloc0", 00:05:46.208 "aliases": [ 00:05:46.208 "4cfb73cc-9ee4-499a-8f6a-4fd22bdba74a" 00:05:46.208 ], 00:05:46.208 "product_name": "Malloc disk", 00:05:46.208 "block_size": 512, 00:05:46.208 "num_blocks": 16384, 00:05:46.208 "uuid": "4cfb73cc-9ee4-499a-8f6a-4fd22bdba74a", 00:05:46.208 "assigned_rate_limits": { 00:05:46.208 "rw_ios_per_sec": 0, 00:05:46.208 "rw_mbytes_per_sec": 0, 00:05:46.208 "r_mbytes_per_sec": 0, 00:05:46.208 "w_mbytes_per_sec": 0 00:05:46.208 }, 00:05:46.208 "claimed": false, 00:05:46.208 "zoned": false, 00:05:46.208 "supported_io_types": { 00:05:46.208 "read": true, 00:05:46.208 "write": true, 00:05:46.208 "unmap": true, 00:05:46.208 "flush": true, 00:05:46.208 "reset": true, 00:05:46.208 "nvme_admin": false, 00:05:46.208 "nvme_io": false, 00:05:46.208 "nvme_io_md": false, 00:05:46.208 "write_zeroes": true, 00:05:46.208 "zcopy": true, 00:05:46.208 "get_zone_info": false, 00:05:46.208 "zone_management": false, 00:05:46.208 "zone_append": false, 00:05:46.208 "compare": false, 00:05:46.208 "compare_and_write": false, 00:05:46.208 "abort": true, 00:05:46.208 "seek_hole": false, 00:05:46.208 "seek_data": false, 00:05:46.208 "copy": true, 00:05:46.208 "nvme_iov_md": false 00:05:46.208 }, 00:05:46.208 "memory_domains": [ 00:05:46.208 { 00:05:46.208 "dma_device_id": "system", 00:05:46.208 "dma_device_type": 1 00:05:46.208 }, 00:05:46.208 { 00:05:46.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.208 "dma_device_type": 2 00:05:46.208 } 00:05:46.208 ], 00:05:46.208 "driver_specific": {} 00:05:46.208 } 00:05:46.208 ]' 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.208 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 [2024-07-14 00:51:35.474568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:46.209 [2024-07-14 00:51:35.474613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.209 [2024-07-14 00:51:35.474646] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e0caf0 00:05:46.209 [2024-07-14 00:51:35.474663] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.209 
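The JSON array above is the bdev_get_bdevs view of the Malloc0 bdev this test creates before layering Passthru0 on it. The same name/block_size/num_blocks fields can be read in-process through the bdev API; a rough sketch assuming spdk/bdev.h is below, where only the bdev name "Malloc0" is taken from this run.

#include "spdk/stdinc.h"
#include "spdk/bdev.h"

static void
dump_malloc0(void)
{
        /* Look up the bdev created by rpc_cmd bdev_malloc_create above. */
        struct spdk_bdev *bdev = spdk_bdev_get_by_name("Malloc0");

        if (bdev == NULL) {
                return;
        }

        /* Prints the fields that also appear in the JSON dump:
         * block_size=512, num_blocks=16384 for this test's Malloc0. */
        printf("%s: block_size=%u num_blocks=%" PRIu64 "\n",
               spdk_bdev_get_name(bdev),
               spdk_bdev_get_block_size(bdev),
               spdk_bdev_get_num_blocks(bdev));
}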
[2024-07-14 00:51:35.476316] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.209 [2024-07-14 00:51:35.476345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.209 Passthru0 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.209 { 00:05:46.209 "name": "Malloc0", 00:05:46.209 "aliases": [ 00:05:46.209 "4cfb73cc-9ee4-499a-8f6a-4fd22bdba74a" 00:05:46.209 ], 00:05:46.209 "product_name": "Malloc disk", 00:05:46.209 "block_size": 512, 00:05:46.209 "num_blocks": 16384, 00:05:46.209 "uuid": "4cfb73cc-9ee4-499a-8f6a-4fd22bdba74a", 00:05:46.209 "assigned_rate_limits": { 00:05:46.209 "rw_ios_per_sec": 0, 00:05:46.209 "rw_mbytes_per_sec": 0, 00:05:46.209 "r_mbytes_per_sec": 0, 00:05:46.209 "w_mbytes_per_sec": 0 00:05:46.209 }, 00:05:46.209 "claimed": true, 00:05:46.209 "claim_type": "exclusive_write", 00:05:46.209 "zoned": false, 00:05:46.209 "supported_io_types": { 00:05:46.209 "read": true, 00:05:46.209 "write": true, 00:05:46.209 "unmap": true, 00:05:46.209 "flush": true, 00:05:46.209 "reset": true, 00:05:46.209 "nvme_admin": false, 00:05:46.209 "nvme_io": false, 00:05:46.209 "nvme_io_md": false, 00:05:46.209 "write_zeroes": true, 00:05:46.209 "zcopy": true, 00:05:46.209 "get_zone_info": false, 00:05:46.209 "zone_management": false, 00:05:46.209 "zone_append": false, 00:05:46.209 "compare": false, 00:05:46.209 "compare_and_write": false, 00:05:46.209 "abort": true, 00:05:46.209 "seek_hole": false, 00:05:46.209 "seek_data": false, 00:05:46.209 "copy": true, 00:05:46.209 "nvme_iov_md": false 00:05:46.209 }, 00:05:46.209 "memory_domains": [ 00:05:46.209 { 00:05:46.209 "dma_device_id": "system", 00:05:46.209 "dma_device_type": 1 00:05:46.209 }, 00:05:46.209 { 00:05:46.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.209 "dma_device_type": 2 00:05:46.209 } 00:05:46.209 ], 00:05:46.209 "driver_specific": {} 00:05:46.209 }, 00:05:46.209 { 00:05:46.209 "name": "Passthru0", 00:05:46.209 "aliases": [ 00:05:46.209 "ae8d1e4d-6cce-5741-9f82-6ca34eadbd59" 00:05:46.209 ], 00:05:46.209 "product_name": "passthru", 00:05:46.209 "block_size": 512, 00:05:46.209 "num_blocks": 16384, 00:05:46.209 "uuid": "ae8d1e4d-6cce-5741-9f82-6ca34eadbd59", 00:05:46.209 "assigned_rate_limits": { 00:05:46.209 "rw_ios_per_sec": 0, 00:05:46.209 "rw_mbytes_per_sec": 0, 00:05:46.209 "r_mbytes_per_sec": 0, 00:05:46.209 "w_mbytes_per_sec": 0 00:05:46.209 }, 00:05:46.209 "claimed": false, 00:05:46.209 "zoned": false, 00:05:46.209 "supported_io_types": { 00:05:46.209 "read": true, 00:05:46.209 "write": true, 00:05:46.209 "unmap": true, 00:05:46.209 "flush": true, 00:05:46.209 "reset": true, 00:05:46.209 "nvme_admin": false, 00:05:46.209 "nvme_io": false, 00:05:46.209 "nvme_io_md": false, 00:05:46.209 "write_zeroes": true, 00:05:46.209 "zcopy": true, 00:05:46.209 "get_zone_info": false, 00:05:46.209 "zone_management": false, 00:05:46.209 "zone_append": false, 00:05:46.209 "compare": false, 00:05:46.209 "compare_and_write": false, 00:05:46.209 "abort": true, 00:05:46.209 "seek_hole": false, 
00:05:46.209 "seek_data": false, 00:05:46.209 "copy": true, 00:05:46.209 "nvme_iov_md": false 00:05:46.209 }, 00:05:46.209 "memory_domains": [ 00:05:46.209 { 00:05:46.209 "dma_device_id": "system", 00:05:46.209 "dma_device_type": 1 00:05:46.209 }, 00:05:46.209 { 00:05:46.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.209 "dma_device_type": 2 00:05:46.209 } 00:05:46.209 ], 00:05:46.209 "driver_specific": { 00:05:46.209 "passthru": { 00:05:46.209 "name": "Passthru0", 00:05:46.209 "base_bdev_name": "Malloc0" 00:05:46.209 } 00:05:46.209 } 00:05:46.209 } 00:05:46.209 ]' 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.209 00:51:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.209 00:05:46.209 real 0m0.227s 00:05:46.209 user 0m0.153s 00:05:46.209 sys 0m0.019s 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.209 00:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.209 ************************************ 00:05:46.209 END TEST rpc_integrity 00:05:46.209 ************************************ 00:05:46.209 00:51:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.209 00:51:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.209 00:51:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.209 00:51:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.209 00:51:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.466 ************************************ 00:05:46.466 START TEST rpc_plugins 00:05:46.466 ************************************ 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:46.466 { 00:05:46.466 "name": "Malloc1", 00:05:46.466 "aliases": [ 00:05:46.466 "87377777-9783-4158-99eb-3852a78cd8ba" 00:05:46.466 ], 00:05:46.466 "product_name": "Malloc disk", 00:05:46.466 "block_size": 4096, 00:05:46.466 "num_blocks": 256, 00:05:46.466 "uuid": "87377777-9783-4158-99eb-3852a78cd8ba", 00:05:46.466 "assigned_rate_limits": { 00:05:46.466 "rw_ios_per_sec": 0, 00:05:46.466 "rw_mbytes_per_sec": 0, 00:05:46.466 "r_mbytes_per_sec": 0, 00:05:46.466 "w_mbytes_per_sec": 0 00:05:46.466 }, 00:05:46.466 "claimed": false, 00:05:46.466 "zoned": false, 00:05:46.466 "supported_io_types": { 00:05:46.466 "read": true, 00:05:46.466 "write": true, 00:05:46.466 "unmap": true, 00:05:46.466 "flush": true, 00:05:46.466 "reset": true, 00:05:46.466 "nvme_admin": false, 00:05:46.466 "nvme_io": false, 00:05:46.466 "nvme_io_md": false, 00:05:46.466 "write_zeroes": true, 00:05:46.466 "zcopy": true, 00:05:46.466 "get_zone_info": false, 00:05:46.466 "zone_management": false, 00:05:46.466 "zone_append": false, 00:05:46.466 "compare": false, 00:05:46.466 "compare_and_write": false, 00:05:46.466 "abort": true, 00:05:46.466 "seek_hole": false, 00:05:46.466 "seek_data": false, 00:05:46.466 "copy": true, 00:05:46.466 "nvme_iov_md": false 00:05:46.466 }, 00:05:46.466 "memory_domains": [ 00:05:46.466 { 00:05:46.466 "dma_device_id": "system", 00:05:46.466 "dma_device_type": 1 00:05:46.466 }, 00:05:46.466 { 00:05:46.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.466 "dma_device_type": 2 00:05:46.466 } 00:05:46.466 ], 00:05:46.466 "driver_specific": {} 00:05:46.466 } 00:05:46.466 ]' 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:46.466 00:51:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.466 00:05:46.466 real 0m0.114s 00:05:46.466 user 0m0.074s 00:05:46.466 sys 0m0.011s 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.466 00:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.466 ************************************ 00:05:46.466 END TEST rpc_plugins 00:05:46.466 ************************************ 00:05:46.466 00:51:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.466 00:51:35 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.466 00:51:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.466 00:51:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.466 00:51:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.466 ************************************ 00:05:46.466 START TEST rpc_trace_cmd_test 00:05:46.467 ************************************ 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:46.467 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1009519", 00:05:46.467 "tpoint_group_mask": "0x8", 00:05:46.467 "iscsi_conn": { 00:05:46.467 "mask": "0x2", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "scsi": { 00:05:46.467 "mask": "0x4", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "bdev": { 00:05:46.467 "mask": "0x8", 00:05:46.467 "tpoint_mask": "0xffffffffffffffff" 00:05:46.467 }, 00:05:46.467 "nvmf_rdma": { 00:05:46.467 "mask": "0x10", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "nvmf_tcp": { 00:05:46.467 "mask": "0x20", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "ftl": { 00:05:46.467 "mask": "0x40", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "blobfs": { 00:05:46.467 "mask": "0x80", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "dsa": { 00:05:46.467 "mask": "0x200", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "thread": { 00:05:46.467 "mask": "0x400", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "nvme_pcie": { 00:05:46.467 "mask": "0x800", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "iaa": { 00:05:46.467 "mask": "0x1000", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "nvme_tcp": { 00:05:46.467 "mask": "0x2000", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "bdev_nvme": { 00:05:46.467 "mask": "0x4000", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 }, 00:05:46.467 "sock": { 00:05:46.467 "mask": "0x8000", 00:05:46.467 "tpoint_mask": "0x0" 00:05:46.467 } 00:05:46.467 }' 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:46.467 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.726 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.726 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.726 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.726 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.726 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.726 00:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.726 00:51:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
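The trace checks above go through the harness's rpc_cmd wrapper around SPDK's JSON-RPC client; a minimal stand-alone sketch of the same queries (an illustration only, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and an SPDK source tree checked out) could look like:
  # Query registered tracepoint groups plus the shared-memory trace file path
  ./scripts/rpc.py -s /var/tmp/spdk.sock trace_get_info > trace_info.json
  # Mirror the assertions made by the test above
  jq 'has("tpoint_group_mask")' trace_info.json   # expected: true
  jq 'has("tpoint_shm_path")'   trace_info.json   # expected: true
  jq -r '.bdev.tpoint_mask'     trace_info.json   # non-zero when bdev tracepoints are enabled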
00:05:46.726 00:05:46.726 real 0m0.199s 00:05:46.726 user 0m0.172s 00:05:46.726 sys 0m0.018s 00:05:46.726 00:51:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.726 00:51:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 ************************************ 00:05:46.726 END TEST rpc_trace_cmd_test 00:05:46.726 ************************************ 00:05:46.726 00:51:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.726 00:51:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.726 00:51:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.726 00:51:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.726 00:51:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.726 00:51:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.726 00:51:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 ************************************ 00:05:46.726 START TEST rpc_daemon_integrity 00:05:46.726 ************************************ 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.726 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.726 { 00:05:46.726 "name": "Malloc2", 00:05:46.726 "aliases": [ 00:05:46.726 "ea277bdf-fa85-460f-a380-b91abd13e029" 00:05:46.726 ], 00:05:46.726 "product_name": "Malloc disk", 00:05:46.726 "block_size": 512, 00:05:46.726 "num_blocks": 16384, 00:05:46.726 "uuid": "ea277bdf-fa85-460f-a380-b91abd13e029", 00:05:46.726 "assigned_rate_limits": { 00:05:46.726 "rw_ios_per_sec": 0, 00:05:46.726 "rw_mbytes_per_sec": 0, 00:05:46.726 "r_mbytes_per_sec": 0, 00:05:46.726 "w_mbytes_per_sec": 0 00:05:46.726 }, 00:05:46.726 "claimed": false, 00:05:46.726 "zoned": false, 00:05:46.726 "supported_io_types": { 00:05:46.726 "read": true, 00:05:46.726 "write": true, 00:05:46.727 "unmap": true, 00:05:46.727 "flush": true, 00:05:46.727 "reset": true, 00:05:46.727 "nvme_admin": false, 00:05:46.727 "nvme_io": false, 
00:05:46.727 "nvme_io_md": false, 00:05:46.727 "write_zeroes": true, 00:05:46.727 "zcopy": true, 00:05:46.727 "get_zone_info": false, 00:05:46.727 "zone_management": false, 00:05:46.727 "zone_append": false, 00:05:46.727 "compare": false, 00:05:46.727 "compare_and_write": false, 00:05:46.727 "abort": true, 00:05:46.727 "seek_hole": false, 00:05:46.727 "seek_data": false, 00:05:46.727 "copy": true, 00:05:46.727 "nvme_iov_md": false 00:05:46.727 }, 00:05:46.727 "memory_domains": [ 00:05:46.727 { 00:05:46.727 "dma_device_id": "system", 00:05:46.727 "dma_device_type": 1 00:05:46.727 }, 00:05:46.727 { 00:05:46.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.727 "dma_device_type": 2 00:05:46.727 } 00:05:46.727 ], 00:05:46.727 "driver_specific": {} 00:05:46.727 } 00:05:46.727 ]' 00:05:46.727 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.987 [2024-07-14 00:51:36.152989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.987 [2024-07-14 00:51:36.153029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.987 [2024-07-14 00:51:36.153053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c5c290 00:05:46.987 [2024-07-14 00:51:36.153067] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.987 [2024-07-14 00:51:36.154425] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.987 [2024-07-14 00:51:36.154453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.987 Passthru0 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.987 { 00:05:46.987 "name": "Malloc2", 00:05:46.987 "aliases": [ 00:05:46.987 "ea277bdf-fa85-460f-a380-b91abd13e029" 00:05:46.987 ], 00:05:46.987 "product_name": "Malloc disk", 00:05:46.987 "block_size": 512, 00:05:46.987 "num_blocks": 16384, 00:05:46.987 "uuid": "ea277bdf-fa85-460f-a380-b91abd13e029", 00:05:46.987 "assigned_rate_limits": { 00:05:46.987 "rw_ios_per_sec": 0, 00:05:46.987 "rw_mbytes_per_sec": 0, 00:05:46.987 "r_mbytes_per_sec": 0, 00:05:46.987 "w_mbytes_per_sec": 0 00:05:46.987 }, 00:05:46.987 "claimed": true, 00:05:46.987 "claim_type": "exclusive_write", 00:05:46.987 "zoned": false, 00:05:46.987 "supported_io_types": { 00:05:46.987 "read": true, 00:05:46.987 "write": true, 00:05:46.987 "unmap": true, 00:05:46.987 "flush": true, 00:05:46.987 "reset": true, 00:05:46.987 "nvme_admin": false, 00:05:46.987 "nvme_io": false, 00:05:46.987 "nvme_io_md": false, 00:05:46.987 "write_zeroes": true, 00:05:46.987 "zcopy": true, 00:05:46.987 "get_zone_info": 
false, 00:05:46.987 "zone_management": false, 00:05:46.987 "zone_append": false, 00:05:46.987 "compare": false, 00:05:46.987 "compare_and_write": false, 00:05:46.987 "abort": true, 00:05:46.987 "seek_hole": false, 00:05:46.987 "seek_data": false, 00:05:46.987 "copy": true, 00:05:46.987 "nvme_iov_md": false 00:05:46.987 }, 00:05:46.987 "memory_domains": [ 00:05:46.987 { 00:05:46.987 "dma_device_id": "system", 00:05:46.987 "dma_device_type": 1 00:05:46.987 }, 00:05:46.987 { 00:05:46.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.987 "dma_device_type": 2 00:05:46.987 } 00:05:46.987 ], 00:05:46.987 "driver_specific": {} 00:05:46.987 }, 00:05:46.987 { 00:05:46.987 "name": "Passthru0", 00:05:46.987 "aliases": [ 00:05:46.987 "1ba0c578-fc0d-506e-af9d-c67f7defe2a8" 00:05:46.987 ], 00:05:46.987 "product_name": "passthru", 00:05:46.987 "block_size": 512, 00:05:46.987 "num_blocks": 16384, 00:05:46.987 "uuid": "1ba0c578-fc0d-506e-af9d-c67f7defe2a8", 00:05:46.987 "assigned_rate_limits": { 00:05:46.987 "rw_ios_per_sec": 0, 00:05:46.987 "rw_mbytes_per_sec": 0, 00:05:46.987 "r_mbytes_per_sec": 0, 00:05:46.987 "w_mbytes_per_sec": 0 00:05:46.987 }, 00:05:46.987 "claimed": false, 00:05:46.987 "zoned": false, 00:05:46.987 "supported_io_types": { 00:05:46.987 "read": true, 00:05:46.987 "write": true, 00:05:46.987 "unmap": true, 00:05:46.987 "flush": true, 00:05:46.987 "reset": true, 00:05:46.987 "nvme_admin": false, 00:05:46.987 "nvme_io": false, 00:05:46.987 "nvme_io_md": false, 00:05:46.987 "write_zeroes": true, 00:05:46.987 "zcopy": true, 00:05:46.987 "get_zone_info": false, 00:05:46.987 "zone_management": false, 00:05:46.987 "zone_append": false, 00:05:46.987 "compare": false, 00:05:46.987 "compare_and_write": false, 00:05:46.987 "abort": true, 00:05:46.987 "seek_hole": false, 00:05:46.987 "seek_data": false, 00:05:46.987 "copy": true, 00:05:46.987 "nvme_iov_md": false 00:05:46.987 }, 00:05:46.987 "memory_domains": [ 00:05:46.987 { 00:05:46.987 "dma_device_id": "system", 00:05:46.987 "dma_device_type": 1 00:05:46.987 }, 00:05:46.987 { 00:05:46.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.987 "dma_device_type": 2 00:05:46.987 } 00:05:46.987 ], 00:05:46.987 "driver_specific": { 00:05:46.987 "passthru": { 00:05:46.987 "name": "Passthru0", 00:05:46.987 "base_bdev_name": "Malloc2" 00:05:46.987 } 00:05:46.987 } 00:05:46.987 } 00:05:46.987 ]' 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.987 00:51:36 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.987 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.988 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.988 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.988 00:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.988 00:05:46.988 real 0m0.222s 00:05:46.988 user 0m0.143s 00:05:46.988 sys 0m0.020s 00:05:46.988 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.988 00:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.988 ************************************ 00:05:46.988 END TEST rpc_daemon_integrity 00:05:46.988 ************************************ 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.988 00:51:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.988 00:51:36 rpc -- rpc/rpc.sh@84 -- # killprocess 1009519 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@948 -- # '[' -z 1009519 ']' 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@952 -- # kill -0 1009519 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1009519 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1009519' 00:05:46.988 killing process with pid 1009519 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@967 -- # kill 1009519 00:05:46.988 00:51:36 rpc -- common/autotest_common.sh@972 -- # wait 1009519 00:05:47.557 00:05:47.557 real 0m1.906s 00:05:47.557 user 0m2.404s 00:05:47.557 sys 0m0.606s 00:05:47.557 00:51:36 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.557 00:51:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.557 ************************************ 00:05:47.557 END TEST rpc 00:05:47.557 ************************************ 00:05:47.557 00:51:36 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.557 00:51:36 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.557 00:51:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.557 00:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.557 00:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:47.557 ************************************ 00:05:47.557 START TEST skip_rpc 00:05:47.557 ************************************ 00:05:47.557 00:51:36 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.557 * Looking for test storage... 
00:05:47.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:47.557 00:51:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:47.557 00:51:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:47.557 00:51:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.557 00:51:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.557 00:51:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.557 00:51:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.557 ************************************ 00:05:47.557 START TEST skip_rpc 00:05:47.557 ************************************ 00:05:47.557 00:51:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:47.557 00:51:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1009882 00:05:47.557 00:51:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.557 00:51:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.557 00:51:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.557 [2024-07-14 00:51:36.906019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:47.557 [2024-07-14 00:51:36.906085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009882 ] 00:05:47.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.557 [2024-07-14 00:51:36.961832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.817 [2024-07-14 00:51:37.051493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1009882 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1009882 ']' 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1009882 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1009882 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1009882' 00:05:53.090 killing process with pid 1009882 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1009882 00:05:53.090 00:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1009882 00:05:53.090 00:05:53.090 real 0m5.458s 00:05:53.090 user 0m5.145s 00:05:53.090 sys 0m0.317s 00:05:53.090 00:51:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.090 00:51:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.090 ************************************ 00:05:53.090 END TEST skip_rpc 00:05:53.090 ************************************ 00:05:53.090 00:51:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.090 00:51:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.090 00:51:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.090 00:51:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.090 00:51:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.090 ************************************ 00:05:53.090 START TEST skip_rpc_with_json 00:05:53.090 ************************************ 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1010571 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1010571 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1010571 ']' 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
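At this point the harness launches a fresh spdk_tgt and waits for its RPC socket to answer before issuing commands; a rough equivalent of that start-and-poll pattern, sketched here only for illustration (assuming the binary under ./build/bin and the default socket path), would be:
  # Start the target in the background, then poll until the RPC server is listening
  ./build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5   # keep polling the UNIX domain socket until it responds
  done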
00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.090 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.090 [2024-07-14 00:51:42.412936] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:53.090 [2024-07-14 00:51:42.413035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010571 ] 00:05:53.090 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.090 [2024-07-14 00:51:42.470491] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.349 [2024-07-14 00:51:42.555396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.611 [2024-07-14 00:51:42.817953] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.611 request: 00:05:53.611 { 00:05:53.611 "trtype": "tcp", 00:05:53.611 "method": "nvmf_get_transports", 00:05:53.611 "req_id": 1 00:05:53.611 } 00:05:53.611 Got JSON-RPC error response 00:05:53.611 response: 00:05:53.611 { 00:05:53.611 "code": -19, 00:05:53.611 "message": "No such device" 00:05:53.611 } 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.611 [2024-07-14 00:51:42.826061] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.611 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:53.611 { 00:05:53.611 "subsystems": [ 00:05:53.611 { 00:05:53.611 "subsystem": "vfio_user_target", 00:05:53.611 "config": null 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "subsystem": "keyring", 00:05:53.611 "config": [] 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "subsystem": "iobuf", 00:05:53.611 "config": [ 00:05:53.611 { 00:05:53.611 "method": "iobuf_set_options", 00:05:53.611 "params": { 00:05:53.611 "small_pool_count": 8192, 00:05:53.611 "large_pool_count": 1024, 00:05:53.611 "small_bufsize": 8192, 00:05:53.611 "large_bufsize": 
135168 00:05:53.611 } 00:05:53.611 } 00:05:53.611 ] 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "subsystem": "sock", 00:05:53.611 "config": [ 00:05:53.611 { 00:05:53.611 "method": "sock_set_default_impl", 00:05:53.611 "params": { 00:05:53.611 "impl_name": "posix" 00:05:53.611 } 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "method": "sock_impl_set_options", 00:05:53.611 "params": { 00:05:53.611 "impl_name": "ssl", 00:05:53.611 "recv_buf_size": 4096, 00:05:53.611 "send_buf_size": 4096, 00:05:53.611 "enable_recv_pipe": true, 00:05:53.611 "enable_quickack": false, 00:05:53.611 "enable_placement_id": 0, 00:05:53.611 "enable_zerocopy_send_server": true, 00:05:53.611 "enable_zerocopy_send_client": false, 00:05:53.611 "zerocopy_threshold": 0, 00:05:53.611 "tls_version": 0, 00:05:53.611 "enable_ktls": false 00:05:53.611 } 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "method": "sock_impl_set_options", 00:05:53.611 "params": { 00:05:53.611 "impl_name": "posix", 00:05:53.611 "recv_buf_size": 2097152, 00:05:53.611 "send_buf_size": 2097152, 00:05:53.611 "enable_recv_pipe": true, 00:05:53.611 "enable_quickack": false, 00:05:53.611 "enable_placement_id": 0, 00:05:53.611 "enable_zerocopy_send_server": true, 00:05:53.611 "enable_zerocopy_send_client": false, 00:05:53.611 "zerocopy_threshold": 0, 00:05:53.611 "tls_version": 0, 00:05:53.611 "enable_ktls": false 00:05:53.611 } 00:05:53.611 } 00:05:53.611 ] 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "subsystem": "vmd", 00:05:53.611 "config": [] 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "subsystem": "accel", 00:05:53.611 "config": [ 00:05:53.611 { 00:05:53.611 "method": "accel_set_options", 00:05:53.611 "params": { 00:05:53.611 "small_cache_size": 128, 00:05:53.611 "large_cache_size": 16, 00:05:53.611 "task_count": 2048, 00:05:53.611 "sequence_count": 2048, 00:05:53.611 "buf_count": 2048 00:05:53.611 } 00:05:53.611 } 00:05:53.611 ] 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "subsystem": "bdev", 00:05:53.611 "config": [ 00:05:53.611 { 00:05:53.611 "method": "bdev_set_options", 00:05:53.611 "params": { 00:05:53.611 "bdev_io_pool_size": 65535, 00:05:53.611 "bdev_io_cache_size": 256, 00:05:53.611 "bdev_auto_examine": true, 00:05:53.611 "iobuf_small_cache_size": 128, 00:05:53.611 "iobuf_large_cache_size": 16 00:05:53.611 } 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "method": "bdev_raid_set_options", 00:05:53.611 "params": { 00:05:53.611 "process_window_size_kb": 1024 00:05:53.611 } 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "method": "bdev_iscsi_set_options", 00:05:53.611 "params": { 00:05:53.611 "timeout_sec": 30 00:05:53.611 } 00:05:53.611 }, 00:05:53.611 { 00:05:53.611 "method": "bdev_nvme_set_options", 00:05:53.611 "params": { 00:05:53.611 "action_on_timeout": "none", 00:05:53.611 "timeout_us": 0, 00:05:53.611 "timeout_admin_us": 0, 00:05:53.611 "keep_alive_timeout_ms": 10000, 00:05:53.611 "arbitration_burst": 0, 00:05:53.611 "low_priority_weight": 0, 00:05:53.611 "medium_priority_weight": 0, 00:05:53.611 "high_priority_weight": 0, 00:05:53.611 "nvme_adminq_poll_period_us": 10000, 00:05:53.611 "nvme_ioq_poll_period_us": 0, 00:05:53.611 "io_queue_requests": 0, 00:05:53.611 "delay_cmd_submit": true, 00:05:53.611 "transport_retry_count": 4, 00:05:53.611 "bdev_retry_count": 3, 00:05:53.611 "transport_ack_timeout": 0, 00:05:53.611 "ctrlr_loss_timeout_sec": 0, 00:05:53.611 "reconnect_delay_sec": 0, 00:05:53.611 "fast_io_fail_timeout_sec": 0, 00:05:53.612 "disable_auto_failback": false, 00:05:53.612 "generate_uuids": false, 00:05:53.612 "transport_tos": 0, 
00:05:53.612 "nvme_error_stat": false, 00:05:53.612 "rdma_srq_size": 0, 00:05:53.612 "io_path_stat": false, 00:05:53.612 "allow_accel_sequence": false, 00:05:53.612 "rdma_max_cq_size": 0, 00:05:53.612 "rdma_cm_event_timeout_ms": 0, 00:05:53.612 "dhchap_digests": [ 00:05:53.612 "sha256", 00:05:53.612 "sha384", 00:05:53.612 "sha512" 00:05:53.612 ], 00:05:53.612 "dhchap_dhgroups": [ 00:05:53.612 "null", 00:05:53.612 "ffdhe2048", 00:05:53.612 "ffdhe3072", 00:05:53.612 "ffdhe4096", 00:05:53.612 "ffdhe6144", 00:05:53.612 "ffdhe8192" 00:05:53.612 ] 00:05:53.612 } 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "method": "bdev_nvme_set_hotplug", 00:05:53.612 "params": { 00:05:53.612 "period_us": 100000, 00:05:53.612 "enable": false 00:05:53.612 } 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "method": "bdev_wait_for_examine" 00:05:53.612 } 00:05:53.612 ] 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "subsystem": "scsi", 00:05:53.612 "config": null 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "subsystem": "scheduler", 00:05:53.612 "config": [ 00:05:53.612 { 00:05:53.612 "method": "framework_set_scheduler", 00:05:53.612 "params": { 00:05:53.612 "name": "static" 00:05:53.612 } 00:05:53.612 } 00:05:53.612 ] 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "subsystem": "vhost_scsi", 00:05:53.612 "config": [] 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "subsystem": "vhost_blk", 00:05:53.612 "config": [] 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "subsystem": "ublk", 00:05:53.612 "config": [] 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "subsystem": "nbd", 00:05:53.612 "config": [] 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "subsystem": "nvmf", 00:05:53.612 "config": [ 00:05:53.612 { 00:05:53.612 "method": "nvmf_set_config", 00:05:53.612 "params": { 00:05:53.612 "discovery_filter": "match_any", 00:05:53.612 "admin_cmd_passthru": { 00:05:53.612 "identify_ctrlr": false 00:05:53.612 } 00:05:53.612 } 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "method": "nvmf_set_max_subsystems", 00:05:53.612 "params": { 00:05:53.612 "max_subsystems": 1024 00:05:53.612 } 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "method": "nvmf_set_crdt", 00:05:53.612 "params": { 00:05:53.612 "crdt1": 0, 00:05:53.612 "crdt2": 0, 00:05:53.612 "crdt3": 0 00:05:53.612 } 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "method": "nvmf_create_transport", 00:05:53.612 "params": { 00:05:53.612 "trtype": "TCP", 00:05:53.612 "max_queue_depth": 128, 00:05:53.612 "max_io_qpairs_per_ctrlr": 127, 00:05:53.612 "in_capsule_data_size": 4096, 00:05:53.612 "max_io_size": 131072, 00:05:53.612 "io_unit_size": 131072, 00:05:53.612 "max_aq_depth": 128, 00:05:53.612 "num_shared_buffers": 511, 00:05:53.612 "buf_cache_size": 4294967295, 00:05:53.612 "dif_insert_or_strip": false, 00:05:53.612 "zcopy": false, 00:05:53.612 "c2h_success": true, 00:05:53.612 "sock_priority": 0, 00:05:53.612 "abort_timeout_sec": 1, 00:05:53.612 "ack_timeout": 0, 00:05:53.612 "data_wr_pool_size": 0 00:05:53.612 } 00:05:53.612 } 00:05:53.612 ] 00:05:53.612 }, 00:05:53.612 { 00:05:53.612 "subsystem": "iscsi", 00:05:53.612 "config": [ 00:05:53.612 { 00:05:53.612 "method": "iscsi_set_options", 00:05:53.612 "params": { 00:05:53.612 "node_base": "iqn.2016-06.io.spdk", 00:05:53.612 "max_sessions": 128, 00:05:53.612 "max_connections_per_session": 2, 00:05:53.612 "max_queue_depth": 64, 00:05:53.612 "default_time2wait": 2, 00:05:53.612 "default_time2retain": 20, 00:05:53.612 "first_burst_length": 8192, 00:05:53.612 "immediate_data": true, 00:05:53.612 "allow_duplicated_isid": false, 00:05:53.612 
"error_recovery_level": 0, 00:05:53.612 "nop_timeout": 60, 00:05:53.612 "nop_in_interval": 30, 00:05:53.612 "disable_chap": false, 00:05:53.612 "require_chap": false, 00:05:53.612 "mutual_chap": false, 00:05:53.612 "chap_group": 0, 00:05:53.612 "max_large_datain_per_connection": 64, 00:05:53.612 "max_r2t_per_connection": 4, 00:05:53.612 "pdu_pool_size": 36864, 00:05:53.612 "immediate_data_pool_size": 16384, 00:05:53.612 "data_out_pool_size": 2048 00:05:53.612 } 00:05:53.612 } 00:05:53.612 ] 00:05:53.612 } 00:05:53.612 ] 00:05:53.612 } 00:05:53.612 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:53.612 00:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1010571 00:05:53.612 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1010571 ']' 00:05:53.612 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1010571 00:05:53.612 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:53.612 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.612 00:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1010571 00:05:53.612 00:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.612 00:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.612 00:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1010571' 00:05:53.612 killing process with pid 1010571 00:05:53.612 00:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1010571 00:05:53.612 00:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1010571 00:05:54.180 00:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1010711 00:05:54.180 00:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:54.180 00:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1010711 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1010711 ']' 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1010711 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1010711 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1010711' 00:05:59.450 killing process with pid 1010711 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1010711 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1010711 
00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:59.450 00:05:59.450 real 0m6.494s 00:05:59.450 user 0m6.091s 00:05:59.450 sys 0m0.677s 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.450 00:51:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.450 ************************************ 00:05:59.450 END TEST skip_rpc_with_json 00:05:59.450 ************************************ 00:05:59.709 00:51:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.709 00:51:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.709 00:51:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.709 00:51:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.709 00:51:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.709 ************************************ 00:05:59.709 START TEST skip_rpc_with_delay 00:05:59.709 ************************************ 00:05:59.709 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:59.709 00:51:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.710 [2024-07-14 00:51:48.952807] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:59.710 [2024-07-14 00:51:48.952934] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.710 00:05:59.710 real 0m0.066s 00:05:59.710 user 0m0.039s 00:05:59.710 sys 0m0.027s 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.710 00:51:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.710 ************************************ 00:05:59.710 END TEST skip_rpc_with_delay 00:05:59.710 ************************************ 00:05:59.710 00:51:48 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.710 00:51:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.710 00:51:48 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.710 00:51:48 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.710 00:51:48 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.710 00:51:48 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.710 00:51:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.710 ************************************ 00:05:59.710 START TEST exit_on_failed_rpc_init 00:05:59.710 ************************************ 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1011430 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1011430 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1011430 ']' 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.710 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.710 [2024-07-14 00:51:49.060951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:05:59.710 [2024-07-14 00:51:49.061045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011430 ] 00:05:59.710 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.710 [2024-07-14 00:51:49.116827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.968 [2024-07-14 00:51:49.204583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:00.227 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.227 [2024-07-14 00:51:49.511432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:00.227 [2024-07-14 00:51:49.511526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011509 ] 00:06:00.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.227 [2024-07-14 00:51:49.574270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.486 [2024-07-14 00:51:49.671015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.486 [2024-07-14 00:51:49.671125] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:00.486 [2024-07-14 00:51:49.671158] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:00.486 [2024-07-14 00:51:49.671173] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1011430 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1011430 ']' 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1011430 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1011430 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1011430' 00:06:00.486 killing process with pid 1011430 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1011430 00:06:00.486 00:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1011430 00:06:01.052 00:06:01.052 real 0m1.194s 00:06:01.052 user 0m1.290s 00:06:01.052 sys 0m0.464s 00:06:01.052 00:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.052 00:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.052 ************************************ 00:06:01.052 END TEST exit_on_failed_rpc_init 00:06:01.052 ************************************ 00:06:01.052 00:51:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.052 00:51:50 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:01.052 00:06:01.052 real 0m13.453s 00:06:01.052 user 0m12.652s 00:06:01.052 sys 0m1.655s 00:06:01.052 00:51:50 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.052 00:51:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.052 ************************************ 00:06:01.052 END TEST skip_rpc 00:06:01.052 ************************************ 00:06:01.052 00:51:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.052 00:51:50 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:01.052 00:51:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.052 00:51:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.052 00:51:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.052 ************************************ 00:06:01.052 START TEST rpc_client 00:06:01.052 ************************************ 00:06:01.052 00:51:50 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:01.052 * Looking for test storage... 00:06:01.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:01.052 00:51:50 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:01.052 OK 00:06:01.052 00:51:50 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:01.052 00:06:01.052 real 0m0.066s 00:06:01.052 user 0m0.026s 00:06:01.052 sys 0m0.045s 00:06:01.052 00:51:50 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.052 00:51:50 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:01.052 ************************************ 00:06:01.052 END TEST rpc_client 00:06:01.052 ************************************ 00:06:01.052 00:51:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.052 00:51:50 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:01.052 00:51:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.052 00:51:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.052 00:51:50 -- common/autotest_common.sh@10 -- # set +x 00:06:01.052 ************************************ 00:06:01.052 START TEST json_config 00:06:01.052 ************************************ 00:06:01.052 00:51:50 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:01.052 00:51:50 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.052 
00:51:50 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.052 00:51:50 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.053 00:51:50 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.053 00:51:50 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.053 00:51:50 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.053 00:51:50 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.053 00:51:50 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.053 00:51:50 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.053 00:51:50 json_config -- paths/export.sh@5 -- # export PATH 00:06:01.053 00:51:50 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@47 -- # : 0 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.053 00:51:50 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.053 00:51:50 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:01.053 INFO: JSON configuration test init 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.053 00:51:50 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:01.053 00:51:50 json_config -- json_config/common.sh@9 -- # local app=target 00:06:01.053 00:51:50 json_config -- json_config/common.sh@10 -- # shift 00:06:01.053 00:51:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.053 00:51:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.053 00:51:50 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.053 00:51:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.053 00:51:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.053 00:51:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1011684 00:06:01.053 00:51:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:01.053 00:51:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.053 Waiting for target to run... 00:06:01.053 00:51:50 json_config -- json_config/common.sh@25 -- # waitforlisten 1011684 /var/tmp/spdk_tgt.sock 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@829 -- # '[' -z 1011684 ']' 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.053 00:51:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.311 [2024-07-14 00:51:50.493452] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:01.311 [2024-07-14 00:51:50.493549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011684 ] 00:06:01.311 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.569 [2024-07-14 00:51:50.837128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.569 [2024-07-14 00:51:50.900128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.134 00:51:51 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.134 00:51:51 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:02.134 00:51:51 json_config -- json_config/common.sh@26 -- # echo '' 00:06:02.134 00:06:02.134 00:51:51 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:02.134 00:51:51 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:02.134 00:51:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.134 00:51:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.134 00:51:51 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:02.134 00:51:51 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:02.134 00:51:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.134 00:51:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.134 00:51:51 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:02.134 00:51:51 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:02.134 00:51:51 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:05.420 00:51:54 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:05.420 00:51:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:05.420 00:51:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.420 00:51:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.420 00:51:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:05.420 00:51:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:05.420 00:51:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:05.420 00:51:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:05.420 00:51:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:05.420 00:51:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:05.677 00:51:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.677 00:51:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:05.677 00:51:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.677 00:51:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:05.677 00:51:54 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.677 00:51:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.935 MallocForNvmf0 00:06:05.935 00:51:55 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.935 00:51:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:06.192 MallocForNvmf1 00:06:06.192 00:51:55 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:06.192 00:51:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:06.450 [2024-07-14 00:51:55.614956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.450 00:51:55 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:06.450 00:51:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:06.717 00:51:55 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.717 00:51:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:07.014 00:51:56 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:07.014 00:51:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:07.014 00:51:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:07.014 00:51:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:07.271 [2024-07-14 00:51:56.598195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:07.271 00:51:56 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:07.271 00:51:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.271 00:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.271 00:51:56 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:07.271 00:51:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.271 00:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.271 00:51:56 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:07.271 00:51:56 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.271 00:51:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.529 MallocBdevForConfigChangeCheck 00:06:07.529 00:51:56 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:07.529 00:51:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.529 00:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.529 00:51:56 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:07.529 00:51:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.095 00:51:57 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:08.095 INFO: shutting down applications... 00:06:08.095 00:51:57 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:08.095 00:51:57 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:08.095 00:51:57 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:08.095 00:51:57 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:09.994 Calling clear_iscsi_subsystem 00:06:09.994 Calling clear_nvmf_subsystem 00:06:09.994 Calling clear_nbd_subsystem 00:06:09.994 Calling clear_ublk_subsystem 00:06:09.994 Calling clear_vhost_blk_subsystem 00:06:09.994 Calling clear_vhost_scsi_subsystem 00:06:09.994 Calling clear_bdev_subsystem 00:06:09.994 00:51:58 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:09.994 00:51:58 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:09.994 00:51:58 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:09.994 00:51:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.994 00:51:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:09.994 00:51:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:09.994 00:51:59 json_config -- json_config/json_config.sh@345 -- # break 00:06:09.994 00:51:59 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:09.994 00:51:59 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:09.994 00:51:59 json_config -- json_config/common.sh@31 -- # local app=target 00:06:09.994 00:51:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.994 00:51:59 json_config -- json_config/common.sh@35 -- # [[ -n 1011684 ]] 00:06:09.994 00:51:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1011684 00:06:09.994 00:51:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.994 00:51:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.994 00:51:59 json_config -- json_config/common.sh@41 -- # kill -0 1011684 00:06:09.994 00:51:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.561 00:51:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.561 00:51:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.561 00:51:59 json_config -- json_config/common.sh@41 -- # kill -0 1011684 00:06:10.561 00:51:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.561 00:51:59 json_config -- json_config/common.sh@43 -- # break 00:06:10.561 00:51:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.561 00:51:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:10.561 SPDK target shutdown done 00:06:10.561 00:51:59 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:10.561 INFO: relaunching applications... 00:06:10.561 00:51:59 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.561 00:51:59 json_config -- json_config/common.sh@9 -- # local app=target 00:06:10.561 00:51:59 json_config -- json_config/common.sh@10 -- # shift 00:06:10.561 00:51:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.561 00:51:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.561 00:51:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.561 00:51:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.561 00:51:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.561 00:51:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1012988 00:06:10.562 00:51:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.562 00:51:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.562 Waiting for target to run... 00:06:10.562 00:51:59 json_config -- json_config/common.sh@25 -- # waitforlisten 1012988 /var/tmp/spdk_tgt.sock 00:06:10.562 00:51:59 json_config -- common/autotest_common.sh@829 -- # '[' -z 1012988 ']' 00:06:10.562 00:51:59 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.562 00:51:59 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.562 00:51:59 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.562 00:51:59 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.562 00:51:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.562 [2024-07-14 00:51:59.866724] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
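For readers following the flow: the relaunch above restarts the target from the configuration that was just saved. A minimal sketch of that restart-and-wait pattern, not taken from this run (paths are shown relative to the spdk checkout, and the polling loop only approximates what the waitforlisten helper in autotest_common.sh does):

./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
tgt_pid=$!
# poll the RPC socket until the relaunched target answers, then further RPCs can be issued
until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done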
00:06:10.562 [2024-07-14 00:51:59.866819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012988 ] 00:06:10.562 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.128 [2024-07-14 00:52:00.378947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.128 [2024-07-14 00:52:00.461050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.409 [2024-07-14 00:52:03.491234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.409 [2024-07-14 00:52:03.523669] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:14.973 00:52:04 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.973 00:52:04 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:14.973 00:52:04 json_config -- json_config/common.sh@26 -- # echo '' 00:06:14.973 00:06:14.973 00:52:04 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:14.973 00:52:04 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:14.973 INFO: Checking if target configuration is the same... 00:06:14.973 00:52:04 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.973 00:52:04 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:14.974 00:52:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.974 + '[' 2 -ne 2 ']' 00:06:14.974 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:14.974 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:14.974 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:14.974 +++ basename /dev/fd/62 00:06:14.974 ++ mktemp /tmp/62.XXX 00:06:14.974 + tmp_file_1=/tmp/62.Kyx 00:06:14.974 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:14.974 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.974 + tmp_file_2=/tmp/spdk_tgt_config.json.sIV 00:06:14.974 + ret=0 00:06:14.974 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.539 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.539 + diff -u /tmp/62.Kyx /tmp/spdk_tgt_config.json.sIV 00:06:15.539 + echo 'INFO: JSON config files are the same' 00:06:15.539 INFO: JSON config files are the same 00:06:15.539 + rm /tmp/62.Kyx /tmp/spdk_tgt_config.json.sIV 00:06:15.539 + exit 0 00:06:15.539 00:52:04 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:15.539 00:52:04 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:15.539 INFO: changing configuration and checking if this can be detected... 
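The "JSON config files are the same" verdict above reduces to a sorted diff of two JSON dumps. A minimal sketch of that comparison, assuming the helper scripts this job already invokes (the /tmp names below are placeholders, not the mktemp files from the log):

# dump the live configuration from the target's RPC socket
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
# normalise both documents so key/element ordering cannot cause spurious differences
./test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json  > /tmp/saved_sorted.json
# exit status 0 means the running target still matches the saved configuration
diff -u /tmp/saved_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'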
00:06:15.539 00:52:04 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.539 00:52:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.539 00:52:04 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.539 00:52:04 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:15.539 00:52:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.539 + '[' 2 -ne 2 ']' 00:06:15.539 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.539 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:15.797 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:15.797 +++ basename /dev/fd/62 00:06:15.797 ++ mktemp /tmp/62.XXX 00:06:15.797 + tmp_file_1=/tmp/62.mD4 00:06:15.797 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.797 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.797 + tmp_file_2=/tmp/spdk_tgt_config.json.DiE 00:06:15.797 + ret=0 00:06:15.797 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.055 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.055 + diff -u /tmp/62.mD4 /tmp/spdk_tgt_config.json.DiE 00:06:16.055 + ret=1 00:06:16.055 + echo '=== Start of file: /tmp/62.mD4 ===' 00:06:16.055 + cat /tmp/62.mD4 00:06:16.055 + echo '=== End of file: /tmp/62.mD4 ===' 00:06:16.055 + echo '' 00:06:16.055 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DiE ===' 00:06:16.055 + cat /tmp/spdk_tgt_config.json.DiE 00:06:16.055 + echo '=== End of file: /tmp/spdk_tgt_config.json.DiE ===' 00:06:16.055 + echo '' 00:06:16.055 + rm /tmp/62.mD4 /tmp/spdk_tgt_config.json.DiE 00:06:16.055 + exit 1 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:16.055 INFO: configuration change detected. 
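The change-detection result above is the same comparison run after the marker bdev is removed: once MallocBdevForConfigChangeCheck is deleted, the sorted diff is no longer empty. A rough sketch under the same assumptions as the previous snippet (illustrative only, not the test's literal code):

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
# a non-zero diff status is what gets reported as a detected configuration change
if ! diff -u <(./test/json_config/config_filter.py -method sort < spdk_tgt_config.json) \
             <(./test/json_config/config_filter.py -method sort < /tmp/live_config.json); then
        echo 'INFO: configuration change detected.'
fi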
00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@317 -- # [[ -n 1012988 ]] 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.055 00:52:05 json_config -- json_config/json_config.sh@323 -- # killprocess 1012988 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@948 -- # '[' -z 1012988 ']' 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@952 -- # kill -0 1012988 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@953 -- # uname 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1012988 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1012988' 00:06:16.055 killing process with pid 1012988 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@967 -- # kill 1012988 00:06:16.055 00:52:05 json_config -- common/autotest_common.sh@972 -- # wait 1012988 00:06:17.955 00:52:07 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:17.955 00:52:07 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:17.955 00:52:07 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.955 00:52:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.955 00:52:07 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:17.955 00:52:07 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:17.955 INFO: Success 00:06:17.955 00:06:17.955 real 0m16.709s 
00:06:17.955 user 0m18.623s 00:06:17.955 sys 0m2.023s 00:06:17.955 00:52:07 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.955 00:52:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.955 ************************************ 00:06:17.955 END TEST json_config 00:06:17.955 ************************************ 00:06:17.955 00:52:07 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.955 00:52:07 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:17.955 00:52:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.955 00:52:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.955 00:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:17.955 ************************************ 00:06:17.955 START TEST json_config_extra_key 00:06:17.955 ************************************ 00:06:17.955 00:52:07 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:17.955 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.955 00:52:07 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.955 00:52:07 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.955 00:52:07 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.955 00:52:07 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.955 00:52:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.955 00:52:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.955 00:52:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:17.955 00:52:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:17.955 00:52:07 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:17.955 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:17.955 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:17.955 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:17.955 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:17.955 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:17.955 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:17.955 00:52:07 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:17.956 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:17.956 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:17.956 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:17.956 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:17.956 INFO: launching applications... 00:06:17.956 00:52:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1013919 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.956 Waiting for target to run... 00:06:17.956 00:52:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1013919 /var/tmp/spdk_tgt.sock 00:06:17.956 00:52:07 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1013919 ']' 00:06:17.956 00:52:07 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.956 00:52:07 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.956 00:52:07 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:17.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.956 00:52:07 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.956 00:52:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.956 [2024-07-14 00:52:07.250131] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
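json_config_extra_key reuses the same start/stop helpers as json_config; the shutdown it performs a little further down follows the pattern already visible above. A compact sketch of that stop sequence ($app_pid stands for whichever spdk_tgt pid the test recorded; this is an illustration, not the helper itself):

kill -SIGINT "$app_pid"
# allow up to 30 x 0.5s = 15s for a clean exit before declaring shutdown done
for i in $(seq 1 30); do
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
done
echo 'SPDK target shutdown done'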
00:06:17.956 [2024-07-14 00:52:07.250214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013919 ] 00:06:17.956 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.524 [2024-07-14 00:52:07.756094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.524 [2024-07-14 00:52:07.838157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.089 00:52:08 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.089 00:52:08 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:19.089 00:06:19.089 00:52:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:19.089 INFO: shutting down applications... 00:06:19.089 00:52:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1013919 ]] 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1013919 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1013919 00:06:19.089 00:52:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.347 00:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.347 00:52:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.347 00:52:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1013919 00:06:19.347 00:52:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:19.347 00:52:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:19.347 00:52:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:19.347 00:52:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:19.347 SPDK target shutdown done 00:06:19.347 00:52:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:19.347 Success 00:06:19.347 00:06:19.347 real 0m1.591s 00:06:19.347 user 0m1.418s 00:06:19.347 sys 0m0.610s 00:06:19.347 00:52:08 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.347 00:52:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.347 ************************************ 00:06:19.347 END TEST json_config_extra_key 00:06:19.347 ************************************ 00:06:19.347 00:52:08 -- common/autotest_common.sh@1142 -- # return 0 00:06:19.347 00:52:08 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.347 00:52:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.347 00:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.347 00:52:08 -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.606 ************************************ 00:06:19.606 START TEST alias_rpc 00:06:19.606 ************************************ 00:06:19.606 00:52:08 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.606 * Looking for test storage... 00:06:19.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:19.606 00:52:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.606 00:52:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1014218 00:06:19.606 00:52:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:19.606 00:52:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1014218 00:06:19.606 00:52:08 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1014218 ']' 00:06:19.606 00:52:08 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.606 00:52:08 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.606 00:52:08 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.606 00:52:08 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.606 00:52:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.606 [2024-07-14 00:52:08.884565] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:19.606 [2024-07-14 00:52:08.884646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014218 ] 00:06:19.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.606 [2024-07-14 00:52:08.942172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.864 [2024-07-14 00:52:09.033001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.123 00:52:09 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.123 00:52:09 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:20.123 00:52:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:20.382 00:52:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1014218 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1014218 ']' 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1014218 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1014218 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1014218' 00:06:20.382 killing process with pid 1014218 00:06:20.382 00:52:09 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1014218 00:06:20.382 00:52:09 alias_rpc -- common/autotest_common.sh@972 -- # wait 1014218 00:06:20.640 00:06:20.640 real 0m1.199s 00:06:20.640 user 0m1.276s 00:06:20.640 sys 0m0.421s 00:06:20.641 00:52:09 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.641 00:52:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.641 ************************************ 00:06:20.641 END TEST alias_rpc 00:06:20.641 ************************************ 00:06:20.641 00:52:10 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.641 00:52:10 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:20.641 00:52:10 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.641 00:52:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.641 00:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.641 00:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:20.641 ************************************ 00:06:20.641 START TEST spdkcli_tcp 00:06:20.641 ************************************ 00:06:20.641 00:52:10 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:20.900 * Looking for test storage... 00:06:20.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.900 00:52:10 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:20.900 00:52:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1014413 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.900 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1014413 00:06:20.900 00:52:10 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1014413 ']' 00:06:20.900 00:52:10 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.900 00:52:10 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.900 00:52:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.900 00:52:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.900 00:52:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.900 [2024-07-14 00:52:10.138094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
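The spdkcli_tcp run starting here verifies RPC access over TCP by bridging a local TCP port to the target's Unix-domain RPC socket; the socat command and rpc.py options appear verbatim a little further down. A minimal sketch of that bridge (127.0.0.1:9998 and /var/tmp/spdk.sock are the values this test uses; backgrounding and the pid variable are illustrative):

# forward TCP 127.0.0.1:9998 to the spdk_tgt RPC Unix socket
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# issue an RPC through the TCP side; -r retries the connection, -t sets the per-request timeout
./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods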
00:06:20.900 [2024-07-14 00:52:10.138198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014413 ] 00:06:20.900 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.900 [2024-07-14 00:52:10.198476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.900 [2024-07-14 00:52:10.286238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.900 [2024-07-14 00:52:10.286241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.158 00:52:10 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.158 00:52:10 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:21.158 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1014423 00:06:21.158 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.158 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:21.418 [ 00:06:21.418 "bdev_malloc_delete", 00:06:21.418 "bdev_malloc_create", 00:06:21.418 "bdev_null_resize", 00:06:21.418 "bdev_null_delete", 00:06:21.418 "bdev_null_create", 00:06:21.418 "bdev_nvme_cuse_unregister", 00:06:21.418 "bdev_nvme_cuse_register", 00:06:21.418 "bdev_opal_new_user", 00:06:21.418 "bdev_opal_set_lock_state", 00:06:21.418 "bdev_opal_delete", 00:06:21.418 "bdev_opal_get_info", 00:06:21.418 "bdev_opal_create", 00:06:21.418 "bdev_nvme_opal_revert", 00:06:21.418 "bdev_nvme_opal_init", 00:06:21.418 "bdev_nvme_send_cmd", 00:06:21.418 "bdev_nvme_get_path_iostat", 00:06:21.418 "bdev_nvme_get_mdns_discovery_info", 00:06:21.418 "bdev_nvme_stop_mdns_discovery", 00:06:21.418 "bdev_nvme_start_mdns_discovery", 00:06:21.418 "bdev_nvme_set_multipath_policy", 00:06:21.418 "bdev_nvme_set_preferred_path", 00:06:21.418 "bdev_nvme_get_io_paths", 00:06:21.418 "bdev_nvme_remove_error_injection", 00:06:21.418 "bdev_nvme_add_error_injection", 00:06:21.418 "bdev_nvme_get_discovery_info", 00:06:21.418 "bdev_nvme_stop_discovery", 00:06:21.418 "bdev_nvme_start_discovery", 00:06:21.418 "bdev_nvme_get_controller_health_info", 00:06:21.418 "bdev_nvme_disable_controller", 00:06:21.418 "bdev_nvme_enable_controller", 00:06:21.418 "bdev_nvme_reset_controller", 00:06:21.418 "bdev_nvme_get_transport_statistics", 00:06:21.418 "bdev_nvme_apply_firmware", 00:06:21.418 "bdev_nvme_detach_controller", 00:06:21.418 "bdev_nvme_get_controllers", 00:06:21.418 "bdev_nvme_attach_controller", 00:06:21.418 "bdev_nvme_set_hotplug", 00:06:21.418 "bdev_nvme_set_options", 00:06:21.418 "bdev_passthru_delete", 00:06:21.418 "bdev_passthru_create", 00:06:21.418 "bdev_lvol_set_parent_bdev", 00:06:21.418 "bdev_lvol_set_parent", 00:06:21.418 "bdev_lvol_check_shallow_copy", 00:06:21.418 "bdev_lvol_start_shallow_copy", 00:06:21.418 "bdev_lvol_grow_lvstore", 00:06:21.418 "bdev_lvol_get_lvols", 00:06:21.418 "bdev_lvol_get_lvstores", 00:06:21.418 "bdev_lvol_delete", 00:06:21.418 "bdev_lvol_set_read_only", 00:06:21.418 "bdev_lvol_resize", 00:06:21.418 "bdev_lvol_decouple_parent", 00:06:21.418 "bdev_lvol_inflate", 00:06:21.418 "bdev_lvol_rename", 00:06:21.418 "bdev_lvol_clone_bdev", 00:06:21.418 "bdev_lvol_clone", 00:06:21.418 "bdev_lvol_snapshot", 00:06:21.418 "bdev_lvol_create", 00:06:21.418 "bdev_lvol_delete_lvstore", 00:06:21.418 
"bdev_lvol_rename_lvstore", 00:06:21.418 "bdev_lvol_create_lvstore", 00:06:21.418 "bdev_raid_set_options", 00:06:21.418 "bdev_raid_remove_base_bdev", 00:06:21.418 "bdev_raid_add_base_bdev", 00:06:21.418 "bdev_raid_delete", 00:06:21.418 "bdev_raid_create", 00:06:21.418 "bdev_raid_get_bdevs", 00:06:21.418 "bdev_error_inject_error", 00:06:21.418 "bdev_error_delete", 00:06:21.418 "bdev_error_create", 00:06:21.418 "bdev_split_delete", 00:06:21.418 "bdev_split_create", 00:06:21.418 "bdev_delay_delete", 00:06:21.418 "bdev_delay_create", 00:06:21.418 "bdev_delay_update_latency", 00:06:21.418 "bdev_zone_block_delete", 00:06:21.418 "bdev_zone_block_create", 00:06:21.418 "blobfs_create", 00:06:21.418 "blobfs_detect", 00:06:21.418 "blobfs_set_cache_size", 00:06:21.418 "bdev_aio_delete", 00:06:21.418 "bdev_aio_rescan", 00:06:21.418 "bdev_aio_create", 00:06:21.418 "bdev_ftl_set_property", 00:06:21.418 "bdev_ftl_get_properties", 00:06:21.418 "bdev_ftl_get_stats", 00:06:21.418 "bdev_ftl_unmap", 00:06:21.418 "bdev_ftl_unload", 00:06:21.418 "bdev_ftl_delete", 00:06:21.418 "bdev_ftl_load", 00:06:21.418 "bdev_ftl_create", 00:06:21.418 "bdev_virtio_attach_controller", 00:06:21.418 "bdev_virtio_scsi_get_devices", 00:06:21.418 "bdev_virtio_detach_controller", 00:06:21.418 "bdev_virtio_blk_set_hotplug", 00:06:21.418 "bdev_iscsi_delete", 00:06:21.418 "bdev_iscsi_create", 00:06:21.418 "bdev_iscsi_set_options", 00:06:21.418 "accel_error_inject_error", 00:06:21.418 "ioat_scan_accel_module", 00:06:21.418 "dsa_scan_accel_module", 00:06:21.418 "iaa_scan_accel_module", 00:06:21.418 "vfu_virtio_create_scsi_endpoint", 00:06:21.418 "vfu_virtio_scsi_remove_target", 00:06:21.418 "vfu_virtio_scsi_add_target", 00:06:21.418 "vfu_virtio_create_blk_endpoint", 00:06:21.418 "vfu_virtio_delete_endpoint", 00:06:21.418 "keyring_file_remove_key", 00:06:21.418 "keyring_file_add_key", 00:06:21.418 "keyring_linux_set_options", 00:06:21.418 "iscsi_get_histogram", 00:06:21.418 "iscsi_enable_histogram", 00:06:21.418 "iscsi_set_options", 00:06:21.418 "iscsi_get_auth_groups", 00:06:21.418 "iscsi_auth_group_remove_secret", 00:06:21.418 "iscsi_auth_group_add_secret", 00:06:21.418 "iscsi_delete_auth_group", 00:06:21.418 "iscsi_create_auth_group", 00:06:21.418 "iscsi_set_discovery_auth", 00:06:21.418 "iscsi_get_options", 00:06:21.418 "iscsi_target_node_request_logout", 00:06:21.418 "iscsi_target_node_set_redirect", 00:06:21.418 "iscsi_target_node_set_auth", 00:06:21.418 "iscsi_target_node_add_lun", 00:06:21.418 "iscsi_get_stats", 00:06:21.418 "iscsi_get_connections", 00:06:21.418 "iscsi_portal_group_set_auth", 00:06:21.418 "iscsi_start_portal_group", 00:06:21.418 "iscsi_delete_portal_group", 00:06:21.418 "iscsi_create_portal_group", 00:06:21.418 "iscsi_get_portal_groups", 00:06:21.418 "iscsi_delete_target_node", 00:06:21.418 "iscsi_target_node_remove_pg_ig_maps", 00:06:21.418 "iscsi_target_node_add_pg_ig_maps", 00:06:21.418 "iscsi_create_target_node", 00:06:21.418 "iscsi_get_target_nodes", 00:06:21.418 "iscsi_delete_initiator_group", 00:06:21.418 "iscsi_initiator_group_remove_initiators", 00:06:21.418 "iscsi_initiator_group_add_initiators", 00:06:21.418 "iscsi_create_initiator_group", 00:06:21.418 "iscsi_get_initiator_groups", 00:06:21.418 "nvmf_set_crdt", 00:06:21.418 "nvmf_set_config", 00:06:21.418 "nvmf_set_max_subsystems", 00:06:21.418 "nvmf_stop_mdns_prr", 00:06:21.418 "nvmf_publish_mdns_prr", 00:06:21.418 "nvmf_subsystem_get_listeners", 00:06:21.418 "nvmf_subsystem_get_qpairs", 00:06:21.418 "nvmf_subsystem_get_controllers", 00:06:21.418 
"nvmf_get_stats", 00:06:21.418 "nvmf_get_transports", 00:06:21.418 "nvmf_create_transport", 00:06:21.418 "nvmf_get_targets", 00:06:21.418 "nvmf_delete_target", 00:06:21.418 "nvmf_create_target", 00:06:21.418 "nvmf_subsystem_allow_any_host", 00:06:21.418 "nvmf_subsystem_remove_host", 00:06:21.418 "nvmf_subsystem_add_host", 00:06:21.418 "nvmf_ns_remove_host", 00:06:21.418 "nvmf_ns_add_host", 00:06:21.418 "nvmf_subsystem_remove_ns", 00:06:21.418 "nvmf_subsystem_add_ns", 00:06:21.418 "nvmf_subsystem_listener_set_ana_state", 00:06:21.418 "nvmf_discovery_get_referrals", 00:06:21.418 "nvmf_discovery_remove_referral", 00:06:21.418 "nvmf_discovery_add_referral", 00:06:21.418 "nvmf_subsystem_remove_listener", 00:06:21.418 "nvmf_subsystem_add_listener", 00:06:21.418 "nvmf_delete_subsystem", 00:06:21.418 "nvmf_create_subsystem", 00:06:21.418 "nvmf_get_subsystems", 00:06:21.418 "env_dpdk_get_mem_stats", 00:06:21.418 "nbd_get_disks", 00:06:21.418 "nbd_stop_disk", 00:06:21.418 "nbd_start_disk", 00:06:21.418 "ublk_recover_disk", 00:06:21.418 "ublk_get_disks", 00:06:21.418 "ublk_stop_disk", 00:06:21.418 "ublk_start_disk", 00:06:21.418 "ublk_destroy_target", 00:06:21.418 "ublk_create_target", 00:06:21.418 "virtio_blk_create_transport", 00:06:21.418 "virtio_blk_get_transports", 00:06:21.418 "vhost_controller_set_coalescing", 00:06:21.418 "vhost_get_controllers", 00:06:21.418 "vhost_delete_controller", 00:06:21.418 "vhost_create_blk_controller", 00:06:21.418 "vhost_scsi_controller_remove_target", 00:06:21.418 "vhost_scsi_controller_add_target", 00:06:21.418 "vhost_start_scsi_controller", 00:06:21.418 "vhost_create_scsi_controller", 00:06:21.418 "thread_set_cpumask", 00:06:21.418 "framework_get_governor", 00:06:21.418 "framework_get_scheduler", 00:06:21.418 "framework_set_scheduler", 00:06:21.418 "framework_get_reactors", 00:06:21.418 "thread_get_io_channels", 00:06:21.418 "thread_get_pollers", 00:06:21.418 "thread_get_stats", 00:06:21.418 "framework_monitor_context_switch", 00:06:21.418 "spdk_kill_instance", 00:06:21.418 "log_enable_timestamps", 00:06:21.418 "log_get_flags", 00:06:21.418 "log_clear_flag", 00:06:21.418 "log_set_flag", 00:06:21.418 "log_get_level", 00:06:21.418 "log_set_level", 00:06:21.418 "log_get_print_level", 00:06:21.418 "log_set_print_level", 00:06:21.418 "framework_enable_cpumask_locks", 00:06:21.418 "framework_disable_cpumask_locks", 00:06:21.418 "framework_wait_init", 00:06:21.418 "framework_start_init", 00:06:21.418 "scsi_get_devices", 00:06:21.418 "bdev_get_histogram", 00:06:21.418 "bdev_enable_histogram", 00:06:21.418 "bdev_set_qos_limit", 00:06:21.418 "bdev_set_qd_sampling_period", 00:06:21.419 "bdev_get_bdevs", 00:06:21.419 "bdev_reset_iostat", 00:06:21.419 "bdev_get_iostat", 00:06:21.419 "bdev_examine", 00:06:21.419 "bdev_wait_for_examine", 00:06:21.419 "bdev_set_options", 00:06:21.419 "notify_get_notifications", 00:06:21.419 "notify_get_types", 00:06:21.419 "accel_get_stats", 00:06:21.419 "accel_set_options", 00:06:21.419 "accel_set_driver", 00:06:21.419 "accel_crypto_key_destroy", 00:06:21.419 "accel_crypto_keys_get", 00:06:21.419 "accel_crypto_key_create", 00:06:21.419 "accel_assign_opc", 00:06:21.419 "accel_get_module_info", 00:06:21.419 "accel_get_opc_assignments", 00:06:21.419 "vmd_rescan", 00:06:21.419 "vmd_remove_device", 00:06:21.419 "vmd_enable", 00:06:21.419 "sock_get_default_impl", 00:06:21.419 "sock_set_default_impl", 00:06:21.419 "sock_impl_set_options", 00:06:21.419 "sock_impl_get_options", 00:06:21.419 "iobuf_get_stats", 00:06:21.419 "iobuf_set_options", 
00:06:21.419 "keyring_get_keys", 00:06:21.419 "framework_get_pci_devices", 00:06:21.419 "framework_get_config", 00:06:21.419 "framework_get_subsystems", 00:06:21.419 "vfu_tgt_set_base_path", 00:06:21.419 "trace_get_info", 00:06:21.419 "trace_get_tpoint_group_mask", 00:06:21.419 "trace_disable_tpoint_group", 00:06:21.419 "trace_enable_tpoint_group", 00:06:21.419 "trace_clear_tpoint_mask", 00:06:21.419 "trace_set_tpoint_mask", 00:06:21.419 "spdk_get_version", 00:06:21.419 "rpc_get_methods" 00:06:21.419 ] 00:06:21.419 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:21.419 00:52:10 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:21.419 00:52:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.419 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:21.419 00:52:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1014413 00:06:21.419 00:52:10 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1014413 ']' 00:06:21.419 00:52:10 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1014413 00:06:21.419 00:52:10 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:21.419 00:52:10 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.419 00:52:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1014413 00:06:21.679 00:52:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.679 00:52:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.679 00:52:10 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1014413' 00:06:21.679 killing process with pid 1014413 00:06:21.679 00:52:10 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1014413 00:06:21.679 00:52:10 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1014413 00:06:21.938 00:06:21.938 real 0m1.227s 00:06:21.938 user 0m2.157s 00:06:21.938 sys 0m0.476s 00:06:21.938 00:52:11 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.938 00:52:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.938 ************************************ 00:06:21.938 END TEST spdkcli_tcp 00:06:21.938 ************************************ 00:06:21.938 00:52:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.938 00:52:11 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:21.938 00:52:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.938 00:52:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.938 00:52:11 -- common/autotest_common.sh@10 -- # set +x 00:06:21.938 ************************************ 00:06:21.938 START TEST dpdk_mem_utility 00:06:21.938 ************************************ 00:06:21.938 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:22.197 * Looking for test storage... 
00:06:22.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:22.197 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:22.197 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1014617 00:06:22.197 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.197 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1014617 00:06:22.197 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1014617 ']' 00:06:22.197 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.197 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.197 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.197 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.197 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.197 [2024-07-14 00:52:11.411704] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:22.197 [2024-07-14 00:52:11.411788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014617 ] 00:06:22.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.197 [2024-07-14 00:52:11.470383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.197 [2024-07-14 00:52:11.557003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.456 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.456 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:22.456 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:22.456 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:22.456 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.456 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.456 { 00:06:22.456 "filename": "/tmp/spdk_mem_dump.txt" 00:06:22.456 } 00:06:22.456 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.456 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:22.717 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:22.717 1 heaps totaling size 814.000000 MiB 00:06:22.717 size: 814.000000 MiB heap id: 0 00:06:22.717 end heaps---------- 00:06:22.717 8 mempools totaling size 598.116089 MiB 00:06:22.717 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:22.717 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:22.717 size: 84.521057 MiB name: bdev_io_1014617 00:06:22.717 size: 51.011292 MiB name: evtpool_1014617 00:06:22.717 
size: 50.003479 MiB name: msgpool_1014617 00:06:22.717 size: 21.763794 MiB name: PDU_Pool 00:06:22.717 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:22.717 size: 0.026123 MiB name: Session_Pool 00:06:22.717 end mempools------- 00:06:22.717 6 memzones totaling size 4.142822 MiB 00:06:22.717 size: 1.000366 MiB name: RG_ring_0_1014617 00:06:22.717 size: 1.000366 MiB name: RG_ring_1_1014617 00:06:22.717 size: 1.000366 MiB name: RG_ring_4_1014617 00:06:22.717 size: 1.000366 MiB name: RG_ring_5_1014617 00:06:22.717 size: 0.125366 MiB name: RG_ring_2_1014617 00:06:22.717 size: 0.015991 MiB name: RG_ring_3_1014617 00:06:22.717 end memzones------- 00:06:22.717 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:22.717 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:22.717 list of free elements. size: 12.519348 MiB 00:06:22.717 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:22.717 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:22.717 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:22.717 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:22.717 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:22.717 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:22.717 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:22.717 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:22.717 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:22.717 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:22.717 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:22.717 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:22.717 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:22.717 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:22.717 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:22.717 list of standard malloc elements. 
size: 199.218079 MiB 00:06:22.717 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:22.717 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:22.717 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:22.717 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:22.717 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:22.717 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:22.717 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:22.717 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:22.717 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:22.717 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:22.717 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:22.717 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:22.717 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:22.717 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:22.717 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:22.717 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:22.717 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:22.717 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:22.717 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:22.717 list of memzone associated elements. 
size: 602.262573 MiB 00:06:22.717 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:22.717 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:22.717 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:22.717 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:22.717 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:22.717 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1014617_0 00:06:22.717 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:22.717 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1014617_0 00:06:22.717 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:22.717 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1014617_0 00:06:22.717 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:22.717 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:22.717 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:22.717 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:22.717 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:22.717 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1014617 00:06:22.717 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:22.717 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1014617 00:06:22.717 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:22.717 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1014617 00:06:22.717 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:22.717 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:22.717 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:22.717 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:22.717 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:22.717 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:22.717 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:22.717 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:22.717 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:22.717 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1014617 00:06:22.717 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:22.717 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1014617 00:06:22.717 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:22.717 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1014617 00:06:22.717 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:22.717 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1014617 00:06:22.717 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:22.717 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1014617 00:06:22.717 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:22.717 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:22.717 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:22.717 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:22.717 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:22.717 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:22.717 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:22.717 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1014617 00:06:22.717 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:22.717 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:22.717 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:22.717 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:22.717 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:22.717 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1014617 00:06:22.717 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:22.717 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:22.717 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:22.717 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1014617 00:06:22.717 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:22.717 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1014617 00:06:22.717 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:22.717 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:22.717 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:22.718 00:52:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1014617 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1014617 ']' 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1014617 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1014617 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1014617' 00:06:22.718 killing process with pid 1014617 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1014617 00:06:22.718 00:52:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1014617 00:06:22.976 00:06:22.976 real 0m1.068s 00:06:22.976 user 0m1.031s 00:06:22.976 sys 0m0.403s 00:06:22.976 00:52:12 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.976 00:52:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.976 ************************************ 00:06:22.976 END TEST dpdk_mem_utility 00:06:22.976 ************************************ 00:06:23.265 00:52:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:23.265 00:52:12 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:23.265 00:52:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.265 00:52:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.265 00:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:23.265 ************************************ 00:06:23.265 START TEST event 00:06:23.266 ************************************ 00:06:23.266 00:52:12 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:23.266 * Looking for test storage... 
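The heap, mempool and memzone breakdown above is produced by scripts/dpdk_mem_info.py, which parses the /tmp/spdk_mem_dump.txt file written by the env_dpdk_get_mem_stats RPC. A rough standalone equivalent of what the test just did, with the workspace path abbreviated to $SPDK_DIR (sizes and addresses differ per run):

  $SPDK_DIR/build/bin/spdk_tgt &                      # a single-core target is enough
  $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats     # writes /tmp/spdk_mem_dump.txt
  $SPDK_DIR/scripts/dpdk_mem_info.py                  # summary: heaps, mempools, memzones
  $SPDK_DIR/scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0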
00:06:23.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:23.266 00:52:12 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:23.266 00:52:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:23.266 00:52:12 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.266 00:52:12 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:23.266 00:52:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.266 00:52:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.266 ************************************ 00:06:23.266 START TEST event_perf 00:06:23.266 ************************************ 00:06:23.266 00:52:12 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.266 Running I/O for 1 seconds...[2024-07-14 00:52:12.501212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:23.266 [2024-07-14 00:52:12.501277] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014806 ] 00:06:23.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.266 [2024-07-14 00:52:12.564877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.526 [2024-07-14 00:52:12.663353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.526 [2024-07-14 00:52:12.663409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.526 [2024-07-14 00:52:12.663520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.526 [2024-07-14 00:52:12.663523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.462 Running I/O for 1 seconds... 00:06:24.462 lcore 0: 233540 00:06:24.462 lcore 1: 233537 00:06:24.462 lcore 2: 233537 00:06:24.462 lcore 3: 233539 00:06:24.462 done. 00:06:24.462 00:06:24.462 real 0m1.257s 00:06:24.462 user 0m4.165s 00:06:24.462 sys 0m0.088s 00:06:24.462 00:52:13 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.462 00:52:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.462 ************************************ 00:06:24.462 END TEST event_perf 00:06:24.462 ************************************ 00:06:24.462 00:52:13 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.462 00:52:13 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:24.462 00:52:13 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.462 00:52:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.462 00:52:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.462 ************************************ 00:06:24.462 START TEST event_reactor 00:06:24.462 ************************************ 00:06:24.462 00:52:13 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:24.462 [2024-07-14 00:52:13.804350] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
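The four "lcore N:" lines and the done marker above are event_perf's output: invoked with -m 0xF -t 1 it runs reactors on four cores for one second and prints a per-lcore event count. Rerun in isolation it is simply (path abbreviated; counts are machine- and load-dependent):

  $SPDK_DIR/test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors, 1 second of event dispatching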
00:06:24.462 [2024-07-14 00:52:13.804414] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1014969 ] 00:06:24.462 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.462 [2024-07-14 00:52:13.866203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.721 [2024-07-14 00:52:13.959852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.659 test_start 00:06:25.659 oneshot 00:06:25.659 tick 100 00:06:25.659 tick 100 00:06:25.659 tick 250 00:06:25.659 tick 100 00:06:25.659 tick 100 00:06:25.659 tick 100 00:06:25.659 tick 250 00:06:25.659 tick 500 00:06:25.659 tick 100 00:06:25.659 tick 100 00:06:25.659 tick 250 00:06:25.659 tick 100 00:06:25.659 tick 100 00:06:25.659 test_end 00:06:25.659 00:06:25.659 real 0m1.249s 00:06:25.659 user 0m1.161s 00:06:25.659 sys 0m0.083s 00:06:25.659 00:52:15 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.659 00:52:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:25.659 ************************************ 00:06:25.659 END TEST event_reactor 00:06:25.659 ************************************ 00:06:25.659 00:52:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.659 00:52:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.659 00:52:15 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:25.659 00:52:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.659 00:52:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.917 ************************************ 00:06:25.917 START TEST event_reactor_perf 00:06:25.917 ************************************ 00:06:25.917 00:52:15 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.917 [2024-07-14 00:52:15.102386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
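The test_start/oneshot/tick/test_end lines above are emitted by the reactor test's event and poller callbacks over the one-second run requested with -t 1. Standalone (path abbreviated):

  $SPDK_DIR/test/event/reactor/reactor -t 1   # single reactor, run its pollers for 1 second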
00:06:25.917 [2024-07-14 00:52:15.102450] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015128 ] 00:06:25.917 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.918 [2024-07-14 00:52:15.166998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.918 [2024-07-14 00:52:15.259897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.298 test_start 00:06:27.298 test_end 00:06:27.298 Performance: 357679 events per second 00:06:27.298 00:06:27.298 real 0m1.252s 00:06:27.298 user 0m1.159s 00:06:27.298 sys 0m0.088s 00:06:27.298 00:52:16 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.298 00:52:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.298 ************************************ 00:06:27.298 END TEST event_reactor_perf 00:06:27.298 ************************************ 00:06:27.298 00:52:16 event -- common/autotest_common.sh@1142 -- # return 0 00:06:27.299 00:52:16 event -- event/event.sh@49 -- # uname -s 00:06:27.299 00:52:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.299 00:52:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.299 00:52:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.299 00:52:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.299 00:52:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.299 ************************************ 00:06:27.299 START TEST event_scheduler 00:06:27.299 ************************************ 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:27.299 * Looking for test storage... 00:06:27.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:27.299 00:52:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.299 00:52:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1015361 00:06:27.299 00:52:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.299 00:52:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.299 00:52:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1015361 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1015361 ']' 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
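The "Performance: 357679 events per second" line above is reactor_perf's result: it measures how many events a single reactor can dispatch over the -t 1 second window. Standalone (path abbreviated; the rate is hardware-dependent):

  $SPDK_DIR/test/event/reactor_perf/reactor_perf -t 1   # one reactor, report sustained events per second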
00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.299 [2024-07-14 00:52:16.484684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:27.299 [2024-07-14 00:52:16.484763] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015361 ] 00:06:27.299 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.299 [2024-07-14 00:52:16.543979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.299 [2024-07-14 00:52:16.630353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.299 [2024-07-14 00:52:16.630417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.299 [2024-07-14 00:52:16.630482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.299 [2024-07-14 00:52:16.630485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:27.299 00:52:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.299 [2024-07-14 00:52:16.691264] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:27.299 [2024-07-14 00:52:16.691290] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.299 [2024-07-14 00:52:16.691321] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:27.299 [2024-07-14 00:52:16.691332] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:27.299 [2024-07-14 00:52:16.691342] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.299 00:52:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.299 00:52:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.559 [2024-07-14 00:52:16.784913] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
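At this point the scheduler test app has been started with -m 0xF -p 0x2 --wait-for-rpc -f, the dynamic scheduler has been selected, and framework_start_init has completed. The same bring-up by hand, using the commands visible in the trace (path abbreviated to $SPDK_DIR; rpc_cmd in the script is just rpc.py against the default socket):

  $SPDK_DIR/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &   # 4 cores, main core 2, hold init until RPC
  $SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic                      # pick the dynamic scheduler
  $SPDK_DIR/scripts/rpc.py framework_start_init                                 # finish init; the test app starts scheduling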
00:06:27.559 00:52:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.559 00:52:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.559 00:52:16 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.559 00:52:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.559 00:52:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.559 ************************************ 00:06:27.559 START TEST scheduler_create_thread 00:06:27.559 ************************************ 00:06:27.559 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:27.559 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:27.559 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.559 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.559 2 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 3 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 4 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 5 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 6 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 7 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 8 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 9 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 10 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 00:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.128 00:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.128 00:06:28.128 real 0m0.589s 00:06:28.128 user 0m0.011s 00:06:28.128 sys 0m0.002s 00:06:28.128 00:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.128 00:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.128 ************************************ 00:06:28.128 END TEST scheduler_create_thread 00:06:28.128 ************************************ 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:28.128 00:52:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:28.128 00:52:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1015361 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1015361 ']' 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1015361 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1015361 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1015361' 00:06:28.128 killing process with pid 1015361 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1015361 00:06:28.128 00:52:17 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1015361 00:06:28.696 [2024-07-14 00:52:17.876999] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
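The scheduler_create_thread subtest above drives everything through rpc.py's scheduler_plugin: four pinned active threads and four pinned idle threads, one unpinned thread at 30% activity, one idle thread that is later raised to 50%, and finally a thread that is created only to be deleted. Condensed from the trace (path abbreviated; thread ids are assigned by the app and may differ):

  RPC="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned to core 0, 100% active (repeated for 0x2, 0x4, 0x8)
  $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # pinned, idle (likewise per core)
  $RPC scheduler_thread_create -n one_third_active -a 30        # unpinned, ~30% active
  $RPC scheduler_thread_create -n half_active -a 0              # starts idle; becomes thread 11 here
  $RPC scheduler_thread_set_active 11 50                        # raise it to 50% activity
  $RPC scheduler_thread_create -n deleted -a 100                # thread 12 ...
  $RPC scheduler_thread_delete 12                               # ... then delete it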
00:06:28.696 00:06:28.696 real 0m1.692s 00:06:28.696 user 0m2.145s 00:06:28.696 sys 0m0.332s 00:06:28.696 00:52:18 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.696 00:52:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.696 ************************************ 00:06:28.696 END TEST event_scheduler 00:06:28.696 ************************************ 00:06:28.696 00:52:18 event -- common/autotest_common.sh@1142 -- # return 0 00:06:28.696 00:52:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:28.954 00:52:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:28.954 00:52:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.954 00:52:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.954 00:52:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:28.954 ************************************ 00:06:28.954 START TEST app_repeat 00:06:28.954 ************************************ 00:06:28.954 00:52:18 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1015617 00:06:28.954 00:52:18 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:28.955 00:52:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.955 00:52:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1015617' 00:06:28.955 Process app_repeat pid: 1015617 00:06:28.955 00:52:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.955 00:52:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:28.955 spdk_app_start Round 0 00:06:28.955 00:52:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1015617 /var/tmp/spdk-nbd.sock 00:06:28.955 00:52:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1015617 ']' 00:06:28.955 00:52:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.955 00:52:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.955 00:52:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.955 00:52:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.955 00:52:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.955 [2024-07-14 00:52:18.163570] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:28.955 [2024-07-14 00:52:18.163633] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1015617 ] 00:06:28.955 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.955 [2024-07-14 00:52:18.224506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.955 [2024-07-14 00:52:18.314654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.955 [2024-07-14 00:52:18.314658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.212 00:52:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.212 00:52:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.212 00:52:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.469 Malloc0 00:06:29.469 00:52:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.728 Malloc1 00:06:29.728 00:52:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.728 00:52:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.986 /dev/nbd0 00:06:29.986 00:52:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.986 00:52:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:29.986 00:52:19 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.986 1+0 records in 00:06:29.986 1+0 records out 00:06:29.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170875 s, 24.0 MB/s 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:29.986 00:52:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:29.986 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.986 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.986 00:52:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.244 /dev/nbd1 00:06:30.244 00:52:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.244 00:52:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.244 1+0 records in 00:06:30.244 1+0 records out 00:06:30.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203884 s, 20.1 MB/s 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.244 00:52:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.244 00:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.244 00:52:19 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.244 00:52:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.244 00:52:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.244 00:52:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:30.504 { 00:06:30.504 "nbd_device": "/dev/nbd0", 00:06:30.504 "bdev_name": "Malloc0" 00:06:30.504 }, 00:06:30.504 { 00:06:30.504 "nbd_device": "/dev/nbd1", 00:06:30.504 "bdev_name": "Malloc1" 00:06:30.504 } 00:06:30.504 ]' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:30.504 { 00:06:30.504 "nbd_device": "/dev/nbd0", 00:06:30.504 "bdev_name": "Malloc0" 00:06:30.504 }, 00:06:30.504 { 00:06:30.504 "nbd_device": "/dev/nbd1", 00:06:30.504 "bdev_name": "Malloc1" 00:06:30.504 } 00:06:30.504 ]' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:30.504 /dev/nbd1' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:30.504 /dev/nbd1' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:30.504 256+0 records in 00:06:30.504 256+0 records out 00:06:30.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461185 s, 227 MB/s 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:30.504 256+0 records in 00:06:30.504 256+0 records out 00:06:30.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241411 s, 43.4 MB/s 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:30.504 256+0 records in 00:06:30.504 256+0 records out 00:06:30.504 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0229068 s, 45.8 MB/s 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.504 00:52:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.763 00:52:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.021 00:52:20 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.021 00:52:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.280 00:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.280 00:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.280 00:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.280 00:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.538 00:52:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.538 00:52:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.796 00:52:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.055 [2024-07-14 00:52:21.244201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.055 [2024-07-14 00:52:21.333199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.055 [2024-07-14 00:52:21.333199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.055 [2024-07-14 00:52:21.393174] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.055 [2024-07-14 00:52:21.393266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.344 00:52:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:35.344 00:52:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:35.344 spdk_app_start Round 1 00:06:35.344 00:52:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1015617 /var/tmp/spdk-nbd.sock 00:06:35.344 00:52:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1015617 ']' 00:06:35.344 00:52:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.344 00:52:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.344 00:52:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
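Condensed from the xtrace above, the data path of one app_repeat round amounts to the few commands below. This is a sketch, not the test script itself: $SPDK is shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout used in this job, and the RPC socket, bdev parameters and dd/cmp arguments are taken directly from the trace.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # placeholder for the repo root
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096                   # Malloc0: 64 MB, 4096-byte blocks
  $RPC bdev_malloc_create 64 4096                   # Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # 1 MiB of reference data
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct     # write it through the NBD device
      cmp -b -n 1M nbdrandtest $d                                # read back and compare byte-for-byte
  done
  rm nbdrandtest
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1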
00:06:35.344 00:52:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.344 00:52:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.344 00:52:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.344 00:52:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:35.344 00:52:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.344 Malloc0 00:06:35.344 00:52:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.601 Malloc1 00:06:35.601 00:52:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.601 00:52:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.602 00:52:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.602 00:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.602 00:52:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.602 00:52:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.860 /dev/nbd0 00:06:35.860 00:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.860 00:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:35.860 1+0 records in 00:06:35.860 1+0 records out 00:06:35.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000128344 s, 31.9 MB/s 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:35.860 00:52:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:35.860 00:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.860 00:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.860 00:52:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.118 /dev/nbd1 00:06:36.118 00:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.118 00:52:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.118 1+0 records in 00:06:36.118 1+0 records out 00:06:36.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187944 s, 21.8 MB/s 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:36.118 00:52:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:36.118 00:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.118 00:52:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.118 00:52:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.118 00:52:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.118 00:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.376 00:52:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:36.376 { 00:06:36.376 "nbd_device": "/dev/nbd0", 00:06:36.376 "bdev_name": "Malloc0" 00:06:36.376 }, 00:06:36.376 { 00:06:36.376 "nbd_device": "/dev/nbd1", 00:06:36.376 "bdev_name": "Malloc1" 00:06:36.376 } 00:06:36.376 ]' 00:06:36.376 00:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.376 { 00:06:36.376 "nbd_device": "/dev/nbd0", 00:06:36.376 "bdev_name": "Malloc0" 00:06:36.376 }, 00:06:36.376 { 00:06:36.376 "nbd_device": "/dev/nbd1", 00:06:36.376 "bdev_name": "Malloc1" 00:06:36.376 } 00:06:36.376 ]' 00:06:36.376 00:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.376 00:52:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.376 /dev/nbd1' 00:06:36.376 00:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.376 /dev/nbd1' 00:06:36.376 00:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.376 00:52:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.376 00:52:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.377 256+0 records in 00:06:36.377 256+0 records out 00:06:36.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527871 s, 199 MB/s 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.377 256+0 records in 00:06:36.377 256+0 records out 00:06:36.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207805 s, 50.5 MB/s 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.377 256+0 records in 00:06:36.377 256+0 records out 00:06:36.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250978 s, 41.8 MB/s 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.377 00:52:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.635 00:52:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.894 00:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.152 00:52:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.152 00:52:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.410 00:52:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.670 [2024-07-14 00:52:27.036038] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:37.929 [2024-07-14 00:52:27.126504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.929 [2024-07-14 00:52:27.126509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.929 [2024-07-14 00:52:27.188631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.929 [2024-07-14 00:52:27.188712] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.517 00:52:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.517 00:52:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:40.517 spdk_app_start Round 2 00:06:40.517 00:52:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1015617 /var/tmp/spdk-nbd.sock 00:06:40.517 00:52:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1015617 ']' 00:06:40.517 00:52:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.517 00:52:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.517 00:52:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
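The nbd_get_count check that brackets each verify pass is just a jq pipeline over nbd_get_disks; roughly, with the same $RPC shorthand as in the sketch above (the expected count of 2 comes from the two disks started in the round, and the trailing true mirrors the trace's handling of an empty match):

  count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -ne 2 ] && return 1     # after nbd_stop_disk the same pipeline must report 0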
00:06:40.517 00:52:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.517 00:52:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.774 00:52:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.774 00:52:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:40.774 00:52:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.031 Malloc0 00:06:41.031 00:52:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.288 Malloc1 00:06:41.288 00:52:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.288 00:52:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.546 /dev/nbd0 00:06:41.546 00:52:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.546 00:52:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:41.546 1+0 records in 00:06:41.546 1+0 records out 00:06:41.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236947 s, 17.3 MB/s 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.546 00:52:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.546 00:52:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.546 00:52:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.546 00:52:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.805 /dev/nbd1 00:06:41.805 00:52:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.805 00:52:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.805 1+0 records in 00:06:41.805 1+0 records out 00:06:41.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016275 s, 25.2 MB/s 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.805 00:52:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.805 00:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.805 00:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.805 00:52:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.805 00:52:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.805 00:52:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:42.063 { 00:06:42.063 "nbd_device": "/dev/nbd0", 00:06:42.063 "bdev_name": "Malloc0" 00:06:42.063 }, 00:06:42.063 { 00:06:42.063 "nbd_device": "/dev/nbd1", 00:06:42.063 "bdev_name": "Malloc1" 00:06:42.063 } 00:06:42.063 ]' 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.063 { 00:06:42.063 "nbd_device": "/dev/nbd0", 00:06:42.063 "bdev_name": "Malloc0" 00:06:42.063 }, 00:06:42.063 { 00:06:42.063 "nbd_device": "/dev/nbd1", 00:06:42.063 "bdev_name": "Malloc1" 00:06:42.063 } 00:06:42.063 ]' 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.063 /dev/nbd1' 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.063 /dev/nbd1' 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.063 256+0 records in 00:06:42.063 256+0 records out 00:06:42.063 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050741 s, 207 MB/s 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.063 00:52:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.321 256+0 records in 00:06:42.321 256+0 records out 00:06:42.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024161 s, 43.4 MB/s 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.321 256+0 records in 00:06:42.321 256+0 records out 00:06:42.321 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226297 s, 46.3 MB/s 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.321 00:52:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.579 00:52:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.837 00:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.095 00:52:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.095 00:52:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.355 00:52:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.615 [2024-07-14 00:52:32.875531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.615 [2024-07-14 00:52:32.965435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.615 [2024-07-14 00:52:32.965440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.615 [2024-07-14 00:52:33.027686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.615 [2024-07-14 00:52:33.027764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.902 00:52:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1015617 /var/tmp/spdk-nbd.sock 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1015617 ']' 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
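Stripping out the per-round detail, the event.sh driver traced here is essentially the loop below; a sketch that reuses $RPC from above and the app_repeat pid 1015617 reported at startup, with waitforlisten and killprocess being the autotest_common.sh helpers the trace steps through:

  for i in 0 1 2; do
      echo "spdk_app_start Round $i"
      waitforlisten 1015617 /var/tmp/spdk-nbd.sock   # wait for the app to (re)open its RPC socket
      # create Malloc0/Malloc1 and run the NBD write/verify pass shown earlier
      $RPC spdk_kill_instance SIGTERM                # app_repeat stops this iteration and reinitializes
      sleep 3
  done
  waitforlisten 1015617 /var/tmp/spdk-nbd.sock       # the app comes back up for Round 3
  killprocess 1015617                                # and is then torn down for good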
00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:46.902 00:52:35 event.app_repeat -- event/event.sh@39 -- # killprocess 1015617 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1015617 ']' 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1015617 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1015617 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1015617' 00:06:46.902 killing process with pid 1015617 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1015617 00:06:46.902 00:52:35 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1015617 00:06:46.902 spdk_app_start is called in Round 0. 00:06:46.902 Shutdown signal received, stop current app iteration 00:06:46.902 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:46.902 spdk_app_start is called in Round 1. 00:06:46.902 Shutdown signal received, stop current app iteration 00:06:46.902 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:46.902 spdk_app_start is called in Round 2. 00:06:46.902 Shutdown signal received, stop current app iteration 00:06:46.902 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:46.902 spdk_app_start is called in Round 3. 
00:06:46.902 Shutdown signal received, stop current app iteration 00:06:46.902 00:52:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.902 00:52:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.902 00:06:46.902 real 0m17.996s 00:06:46.902 user 0m39.284s 00:06:46.902 sys 0m3.134s 00:06:46.902 00:52:36 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.902 00:52:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.902 ************************************ 00:06:46.902 END TEST app_repeat 00:06:46.902 ************************************ 00:06:46.902 00:52:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:46.902 00:52:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.902 00:52:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:46.902 00:52:36 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.902 00:52:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.902 00:52:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.902 ************************************ 00:06:46.902 START TEST cpu_locks 00:06:46.902 ************************************ 00:06:46.902 00:52:36 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:46.902 * Looking for test storage... 00:06:46.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:46.902 00:52:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:46.902 00:52:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:46.902 00:52:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:46.903 00:52:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:46.903 00:52:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.903 00:52:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.903 00:52:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.903 ************************************ 00:06:46.903 START TEST default_locks 00:06:46.903 ************************************ 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1017969 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1017969 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1017969 ']' 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
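The default_locks case that starts here (and is traced over the following lines) reduces to: launch a target pinned to a single core, check that it holds a lock whose name contains spdk_cpu_lock, kill it, and confirm that waiting on the dead pid fails. A rough sketch under those assumptions, with $SPDK as above and waitforlisten/killprocess again standing in for the autotest_common.sh helpers:

  $SPDK/build/bin/spdk_tgt -m 0x1 &
  pid=$!
  waitforlisten $pid                        # default RPC socket /var/tmp/spdk.sock
  lslocks -p $pid | grep -q spdk_cpu_lock   # the per-core lock is held while the target runs
  killprocess $pid
  ! waitforlisten $pid                      # expect "No such process" once the target is gone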
00:06:46.903 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.903 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.162 [2024-07-14 00:52:36.320191] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:47.162 [2024-07-14 00:52:36.320293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017969 ] 00:06:47.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.162 [2024-07-14 00:52:36.382685] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.162 [2024-07-14 00:52:36.466327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.423 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.423 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:47.423 00:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1017969 00:06:47.423 00:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1017969 00:06:47.423 00:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.684 lslocks: write error 00:06:47.684 00:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1017969 00:06:47.684 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1017969 ']' 00:06:47.684 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1017969 00:06:47.684 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:47.684 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.684 00:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1017969 00:06:47.684 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.684 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.684 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1017969' 00:06:47.684 killing process with pid 1017969 00:06:47.684 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1017969 00:06:47.684 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1017969 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1017969 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1017969 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1017969 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1017969 ']' 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1017969) - No such process 00:06:48.256 ERROR: process (pid: 1017969) is no longer running 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.256 00:06:48.256 real 0m1.162s 00:06:48.256 user 0m1.068s 00:06:48.256 sys 0m0.541s 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.256 00:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.256 ************************************ 00:06:48.256 END TEST default_locks 00:06:48.256 ************************************ 00:06:48.256 00:52:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:48.256 00:52:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.256 00:52:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.256 00:52:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.256 00:52:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.256 ************************************ 00:06:48.256 START TEST default_locks_via_rpc 00:06:48.256 ************************************ 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1018131 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.256 00:52:37 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1018131 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1018131 ']' 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.256 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.256 [2024-07-14 00:52:37.531982] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:48.256 [2024-07-14 00:52:37.532076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018131 ] 00:06:48.256 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.256 [2024-07-14 00:52:37.586977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.515 [2024-07-14 00:52:37.677024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1018131 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1018131 00:06:48.774 00:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.774 
00:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1018131 00:06:48.774 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1018131 ']' 00:06:48.774 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1018131 00:06:48.774 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:48.774 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.774 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1018131 00:06:49.034 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.034 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.034 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1018131' 00:06:49.034 killing process with pid 1018131 00:06:49.035 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1018131 00:06:49.035 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1018131 00:06:49.295 00:06:49.295 real 0m1.138s 00:06:49.295 user 0m1.105s 00:06:49.295 sys 0m0.495s 00:06:49.295 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.295 00:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.295 ************************************ 00:06:49.295 END TEST default_locks_via_rpc 00:06:49.295 ************************************ 00:06:49.295 00:52:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.295 00:52:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.295 00:52:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.295 00:52:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.295 00:52:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.295 ************************************ 00:06:49.295 START TEST non_locking_app_on_locked_coremask 00:06:49.295 ************************************ 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1018293 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1018293 /var/tmp/spdk.sock 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1018293 ']' 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.295 00:52:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.553 [2024-07-14 00:52:38.712905] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:49.553 [2024-07-14 00:52:38.713021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018293 ] 00:06:49.553 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.553 [2024-07-14 00:52:38.770204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.553 [2024-07-14 00:52:38.857657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.811 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.811 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:49.811 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1018415 00:06:49.812 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:49.812 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1018415 /var/tmp/spdk2.sock 00:06:49.812 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1018415 ']' 00:06:49.812 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.812 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.812 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.812 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.812 00:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.812 [2024-07-14 00:52:39.163734] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:49.812 [2024-07-14 00:52:39.163823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018415 ] 00:06:49.812 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.071 [2024-07-14 00:52:39.256752] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
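The locks_exist checks that run next all reduce to the pipeline visible in the log: list the file locks held by the target process and look for the spdk_cpu_lock marker. The second target above was started with --disable-cpumask-locks, so it can share core 0 without taking a lock of its own. The recurring "lslocks: write error" lines are likely harmless: grep -q exits on the first match, so lslocks ends up writing into a closed pipe. A minimal stand-alone version of the check, assuming util-linux lslocks and a target pid placed in $pid (a placeholder, not a variable from this run), would be:

    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # one lock file per claimed core, e.g. spdk_cpu_lock_000 for core 0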
00:06:50.071 [2024-07-14 00:52:39.256789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.071 [2024-07-14 00:52:39.446707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.008 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.008 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:51.008 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1018293 00:06:51.008 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1018293 00:06:51.008 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.268 lslocks: write error 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1018293 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1018293 ']' 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1018293 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1018293 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1018293' 00:06:51.268 killing process with pid 1018293 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1018293 00:06:51.268 00:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1018293 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1018415 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1018415 ']' 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1018415 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1018415 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1018415' 00:06:52.203 
killing process with pid 1018415 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1018415 00:06:52.203 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1018415 00:06:52.461 00:06:52.461 real 0m3.126s 00:06:52.461 user 0m3.253s 00:06:52.461 sys 0m1.048s 00:06:52.461 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.461 00:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.461 ************************************ 00:06:52.461 END TEST non_locking_app_on_locked_coremask 00:06:52.461 ************************************ 00:06:52.461 00:52:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:52.461 00:52:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.461 00:52:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.461 00:52:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.461 00:52:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.461 ************************************ 00:06:52.461 START TEST locking_app_on_unlocked_coremask 00:06:52.461 ************************************ 00:06:52.461 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:52.461 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1018727 00:06:52.461 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.461 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1018727 /var/tmp/spdk.sock 00:06:52.461 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1018727 ']' 00:06:52.461 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.461 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.462 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.462 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.462 00:52:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.721 [2024-07-14 00:52:41.896447] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:52.722 [2024-07-14 00:52:41.896533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018727 ] 00:06:52.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.722 [2024-07-14 00:52:41.958575] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:52.722 [2024-07-14 00:52:41.958613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.722 [2024-07-14 00:52:42.047272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1018758 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1018758 /var/tmp/spdk2.sock 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1018758 ']' 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.981 00:52:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.981 [2024-07-14 00:52:42.356017] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:52.981 [2024-07-14 00:52:42.356105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018758 ] 00:06:52.981 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.239 [2024-07-14 00:52:42.446018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.239 [2024-07-14 00:52:42.622140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.173 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.173 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:54.173 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1018758 00:06:54.173 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1018758 00:06:54.173 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.432 lslocks: write error 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1018727 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1018727 ']' 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1018727 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1018727 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1018727' 00:06:54.432 killing process with pid 1018727 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1018727 00:06:54.432 00:52:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1018727 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1018758 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1018758 ']' 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1018758 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1018758 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1018758' 00:06:55.372 killing process with pid 1018758 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1018758 00:06:55.372 00:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1018758 00:06:55.942 00:06:55.942 real 0m3.250s 00:06:55.942 user 0m3.384s 00:06:55.942 sys 0m1.091s 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.942 ************************************ 00:06:55.942 END TEST locking_app_on_unlocked_coremask 00:06:55.942 ************************************ 00:06:55.942 00:52:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.942 00:52:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:55.942 00:52:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.942 00:52:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.942 00:52:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.942 ************************************ 00:06:55.942 START TEST locking_app_on_locked_coremask 00:06:55.942 ************************************ 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1019161 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1019161 /var/tmp/spdk.sock 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1019161 ']' 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.942 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.942 [2024-07-14 00:52:45.200298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:55.942 [2024-07-14 00:52:45.200387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019161 ] 00:06:55.942 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.942 [2024-07-14 00:52:45.262844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.942 [2024-07-14 00:52:45.351254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1019170 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1019170 /var/tmp/spdk2.sock 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1019170 /var/tmp/spdk2.sock 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1019170 /var/tmp/spdk2.sock 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1019170 ']' 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.251 00:52:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.509 [2024-07-14 00:52:45.671309] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
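The NOT wrapper around waitforlisten above inverts the exit status, so this test passes only if the second spdk_tgt, started on the same core without --disable-cpumask-locks, fails to come up; the expected abort ("Cannot create lock on core 0, probably process 1019161 has claimed it") appears a few lines further on. A rough sketch of the wrapper's semantics, not the actual autotest_common.sh implementation, which also records the exit code:

    NOT() { ! "$@"; }                            # succeed only when the wrapped command fails
    NOT waitforlisten 1019170 /var/tmp/spdk2.sock  # passes because the second target never starts listening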
00:06:56.509 [2024-07-14 00:52:45.671408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019170 ] 00:06:56.509 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.509 [2024-07-14 00:52:45.762860] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1019161 has claimed it. 00:06:56.509 [2024-07-14 00:52:45.762917] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1019170) - No such process 00:06:57.077 ERROR: process (pid: 1019170) is no longer running 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1019161 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1019161 00:06:57.077 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.336 lslocks: write error 00:06:57.336 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1019161 00:06:57.336 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1019161 ']' 00:06:57.336 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1019161 00:06:57.336 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:57.336 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.336 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1019161 00:06:57.596 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.596 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.596 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1019161' 00:06:57.596 killing process with pid 1019161 00:06:57.596 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1019161 00:06:57.596 00:52:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1019161 00:06:57.854 00:06:57.854 real 0m2.025s 00:06:57.854 user 0m2.171s 00:06:57.854 sys 0m0.656s 00:06:57.854 00:52:47 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.854 00:52:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.854 ************************************ 00:06:57.854 END TEST locking_app_on_locked_coremask 00:06:57.854 ************************************ 00:06:57.854 00:52:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.854 00:52:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:57.854 00:52:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.854 00:52:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.854 00:52:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.854 ************************************ 00:06:57.854 START TEST locking_overlapped_coremask 00:06:57.854 ************************************ 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1019460 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1019460 /var/tmp/spdk.sock 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1019460 ']' 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.854 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.113 [2024-07-14 00:52:47.270390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:58.113 [2024-07-14 00:52:47.270478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019460 ] 00:06:58.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.113 [2024-07-14 00:52:47.331394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.113 [2024-07-14 00:52:47.425036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.113 [2024-07-14 00:52:47.425090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.113 [2024-07-14 00:52:47.425093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1019472 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1019472 /var/tmp/spdk2.sock 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1019472 /var/tmp/spdk2.sock 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1019472 /var/tmp/spdk2.sock 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1019472 ']' 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.372 00:52:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.372 [2024-07-14 00:52:47.726104] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
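The failure that follows is the expected result of the overlapping core masks: -m 0x7 pins the first target to cores 0-2 and -m 0x1c requests cores 2-4, so the two sets intersect exactly at core 2, the core named in the error below. The overlap can be confirmed with plain shell arithmetic:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4 -> bit 2 set -> core 2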
00:06:58.372 [2024-07-14 00:52:47.726192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019472 ] 00:06:58.372 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.631 [2024-07-14 00:52:47.814259] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1019460 has claimed it. 00:06:58.631 [2024-07-14 00:52:47.814321] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1019472) - No such process 00:06:59.199 ERROR: process (pid: 1019472) is no longer running 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1019460 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1019460 ']' 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1019460 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1019460 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1019460' 00:06:59.199 killing process with pid 1019460 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1019460 00:06:59.199 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1019460 00:06:59.459 00:06:59.459 real 0m1.639s 00:06:59.459 user 0m4.402s 00:06:59.460 sys 0m0.466s 00:06:59.460 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.460 00:52:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.460 ************************************ 00:06:59.460 END TEST locking_overlapped_coremask 00:06:59.460 ************************************ 00:06:59.719 00:52:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:59.719 00:52:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:59.719 00:52:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.719 00:52:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.719 00:52:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.719 ************************************ 00:06:59.719 START TEST locking_overlapped_coremask_via_rpc 00:06:59.719 ************************************ 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1019634 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1019634 /var/tmp/spdk.sock 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1019634 ']' 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.719 00:52:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.719 [2024-07-14 00:52:48.953506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:59.719 [2024-07-14 00:52:48.953602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019634 ] 00:06:59.719 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.719 [2024-07-14 00:52:49.024802] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.719 [2024-07-14 00:52:49.024862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.719 [2024-07-14 00:52:49.126477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.719 [2024-07-14 00:52:49.126551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.719 [2024-07-14 00:52:49.126542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1019762 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1019762 /var/tmp/spdk2.sock 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1019762 ']' 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.287 00:52:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.287 [2024-07-14 00:52:49.447742] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:00.287 [2024-07-14 00:52:49.447827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019762 ] 00:07:00.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.287 [2024-07-14 00:52:49.542885] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.287 [2024-07-14 00:52:49.542924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.547 [2024-07-14 00:52:49.720146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.547 [2024-07-14 00:52:49.720208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:00.547 [2024-07-14 00:52:49.720209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.114 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.114 [2024-07-14 00:52:50.415971] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1019634 has claimed it. 
00:07:01.114 request: 00:07:01.114 { 00:07:01.114 "method": "framework_enable_cpumask_locks", 00:07:01.115 "req_id": 1 00:07:01.115 } 00:07:01.115 Got JSON-RPC error response 00:07:01.115 response: 00:07:01.115 { 00:07:01.115 "code": -32603, 00:07:01.115 "message": "Failed to claim CPU core: 2" 00:07:01.115 } 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1019634 /var/tmp/spdk.sock 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1019634 ']' 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.115 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1019762 /var/tmp/spdk2.sock 00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1019762 ']' 00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
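In this RPC-based variant both targets start with --disable-cpumask-locks and the lock claim is deferred to the framework_enable_cpumask_locks RPC; the JSON above is the raw request and the -32603 "Failed to claim CPU core: 2" response from the second target. The harness's rpc_cmd effectively forwards its arguments to SPDK's scripts/rpc.py, so the same exchange could be reproduced manually along these lines (a sketch assuming rpc.py exposes the method under its RPC name and is run from the spdk tree, with the socket paths used in this run):

    scripts/rpc.py framework_enable_cpumask_locks                           # first target (cores 0-2): succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks    # second target: fails, core 2 already claimed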
00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.373 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.633 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.633 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.633 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:01.633 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.633 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.633 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.633 00:07:01.633 real 0m2.028s 00:07:01.633 user 0m1.159s 00:07:01.633 sys 0m0.182s 00:07:01.633 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.633 00:52:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.633 ************************************ 00:07:01.633 END TEST locking_overlapped_coremask_via_rpc 00:07:01.633 ************************************ 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:01.633 00:52:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:01.633 00:52:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1019634 ]] 00:07:01.633 00:52:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1019634 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1019634 ']' 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1019634 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1019634 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1019634' 00:07:01.633 killing process with pid 1019634 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1019634 00:07:01.633 00:52:50 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1019634 00:07:02.201 00:52:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1019762 ]] 00:07:02.201 00:52:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1019762 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1019762 ']' 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1019762 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1019762 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1019762' 00:07:02.201 killing process with pid 1019762 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1019762 00:07:02.201 00:52:51 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1019762 00:07:02.461 00:52:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.461 00:52:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:02.461 00:52:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1019634 ]] 00:07:02.461 00:52:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1019634 00:07:02.461 00:52:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1019634 ']' 00:07:02.461 00:52:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1019634 00:07:02.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1019634) - No such process 00:07:02.461 00:52:51 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1019634 is not found' 00:07:02.461 Process with pid 1019634 is not found 00:07:02.461 00:52:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1019762 ]] 00:07:02.461 00:52:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1019762 00:07:02.461 00:52:51 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1019762 ']' 00:07:02.461 00:52:51 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1019762 00:07:02.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1019762) - No such process 00:07:02.461 00:52:51 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1019762 is not found' 00:07:02.461 Process with pid 1019762 is not found 00:07:02.461 00:52:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.461 00:07:02.461 real 0m15.627s 00:07:02.461 user 0m27.480s 00:07:02.461 sys 0m5.396s 00:07:02.461 00:52:51 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.461 00:52:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.461 ************************************ 00:07:02.461 END TEST cpu_locks 00:07:02.461 ************************************ 00:07:02.461 00:52:51 event -- common/autotest_common.sh@1142 -- # return 0 00:07:02.461 00:07:02.461 real 0m39.421s 00:07:02.461 user 1m15.545s 00:07:02.461 sys 0m9.339s 00:07:02.461 00:52:51 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.461 00:52:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.461 ************************************ 00:07:02.461 END TEST event 00:07:02.461 ************************************ 00:07:02.461 00:52:51 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.461 00:52:51 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:02.461 00:52:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.461 00:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.461 
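Back in the cpu_locks run above, the check_remaining_locks step reduces to a glob comparison: after the failed RPC, the first target must still own exactly the lock files for cores 0 through 2. A condensed sketch of that check:

# compare what is actually on disk against the expected per-core lock files
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only cores 0-2 are locked"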
00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:07:02.720 ************************************ 00:07:02.720 START TEST thread 00:07:02.720 ************************************ 00:07:02.720 00:52:51 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:02.720 * Looking for test storage... 00:07:02.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:02.720 00:52:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.720 00:52:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:02.720 00:52:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.720 00:52:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.720 ************************************ 00:07:02.720 START TEST thread_poller_perf 00:07:02.720 ************************************ 00:07:02.720 00:52:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.720 [2024-07-14 00:52:51.965635] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:02.720 [2024-07-14 00:52:51.965700] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020133 ] 00:07:02.720 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.720 [2024-07-14 00:52:52.030714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.720 [2024-07-14 00:52:52.119979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.720 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:04.102 ====================================== 00:07:04.103 busy:2713775734 (cyc) 00:07:04.103 total_run_count: 292000 00:07:04.103 tsc_hz: 2700000000 (cyc) 00:07:04.103 ====================================== 00:07:04.103 poller_cost: 9293 (cyc), 3441 (nsec) 00:07:04.103 00:07:04.103 real 0m1.258s 00:07:04.103 user 0m1.166s 00:07:04.103 sys 0m0.086s 00:07:04.103 00:52:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.103 00:52:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.103 ************************************ 00:07:04.103 END TEST thread_poller_perf 00:07:04.103 ************************************ 00:07:04.103 00:52:53 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:04.103 00:52:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.103 00:52:53 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:04.103 00:52:53 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.103 00:52:53 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.103 ************************************ 00:07:04.103 START TEST thread_poller_perf 00:07:04.103 ************************************ 00:07:04.103 00:52:53 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.103 [2024-07-14 00:52:53.277762] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:04.103 [2024-07-14 00:52:53.277833] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020287 ] 00:07:04.103 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.103 [2024-07-14 00:52:53.342535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.103 [2024-07-14 00:52:53.436491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.103 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:05.481 ====================================== 00:07:05.481 busy:2702695705 (cyc) 00:07:05.481 total_run_count: 3863000 00:07:05.481 tsc_hz: 2700000000 (cyc) 00:07:05.481 ====================================== 00:07:05.481 poller_cost: 699 (cyc), 258 (nsec) 00:07:05.481 00:07:05.481 real 0m1.257s 00:07:05.481 user 0m1.166s 00:07:05.481 sys 0m0.085s 00:07:05.481 00:52:54 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.481 00:52:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.481 ************************************ 00:07:05.481 END TEST thread_poller_perf 00:07:05.481 ************************************ 00:07:05.481 00:52:54 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:05.481 00:52:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:05.481 00:07:05.481 real 0m2.666s 00:07:05.481 user 0m2.390s 00:07:05.481 sys 0m0.275s 00:07:05.481 00:52:54 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.481 00:52:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.481 ************************************ 00:07:05.481 END TEST thread 00:07:05.481 ************************************ 00:07:05.481 00:52:54 -- common/autotest_common.sh@1142 -- # return 0 00:07:05.481 00:52:54 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:05.481 00:52:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.481 00:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.481 00:52:54 -- common/autotest_common.sh@10 -- # set +x 00:07:05.481 ************************************ 00:07:05.481 START TEST accel 00:07:05.481 ************************************ 00:07:05.481 00:52:54 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:05.481 * Looking for test storage... 00:07:05.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:05.481 00:52:54 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:05.481 00:52:54 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:05.481 00:52:54 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.481 00:52:54 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1020484 00:07:05.481 00:52:54 accel -- accel/accel.sh@63 -- # waitforlisten 1020484 00:07:05.481 00:52:54 accel -- common/autotest_common.sh@829 -- # '[' -z 1020484 ']' 00:07:05.481 00:52:54 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.481 00:52:54 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:05.481 00:52:54 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.481 00:52:54 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
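The two thread_poller_perf summaries above follow directly from the reported counters: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure converts that through the 2.7 GHz TSC. A quick check with the numbers from the 1 µs run (the 0 µs run works out the same way to 699 cyc and 258 nsec); whether poller_perf truncates or rounds internally is not visible from the log, but truncation matches both runs:

busy=2713775734 total_run_count=292000 tsc_hz=2700000000
echo "poller_cost: $(( busy / total_run_count )) (cyc)"                          # 9293
echo "poller_cost: $(( busy / total_run_count * 1000000000 / tsc_hz )) (nsec)"   # 3441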
00:07:05.481 00:52:54 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.481 00:52:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.481 00:52:54 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:05.481 00:52:54 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.481 00:52:54 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.481 00:52:54 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.481 00:52:54 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.481 00:52:54 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.481 00:52:54 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:05.481 00:52:54 accel -- accel/accel.sh@41 -- # jq -r . 00:07:05.481 [2024-07-14 00:52:54.694296] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:05.481 [2024-07-14 00:52:54.694389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020484 ] 00:07:05.481 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.481 [2024-07-14 00:52:54.754118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.481 [2024-07-14 00:52:54.838725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.740 00:52:55 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.740 00:52:55 accel -- common/autotest_common.sh@862 -- # return 0 00:07:05.740 00:52:55 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:05.740 00:52:55 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:05.740 00:52:55 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:05.740 00:52:55 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:05.740 00:52:55 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:05.740 00:52:55 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:05.740 00:52:55 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.740 00:52:55 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:05.740 00:52:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.740 00:52:55 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.740 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.740 
00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.740 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.741 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.741 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.741 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.741 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.741 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.741 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.741 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.741 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.741 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.741 00:52:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:05.741 00:52:55 accel -- accel/accel.sh@72 -- # IFS== 00:07:05.741 00:52:55 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:05.741 00:52:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:05.741 00:52:55 accel -- accel/accel.sh@75 -- # killprocess 1020484 00:07:05.741 00:52:55 accel -- common/autotest_common.sh@948 -- # '[' -z 1020484 ']' 00:07:05.741 00:52:55 accel -- common/autotest_common.sh@952 -- # kill -0 1020484 00:07:05.741 00:52:55 accel -- common/autotest_common.sh@953 -- # uname 00:07:05.741 00:52:55 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.741 00:52:55 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1020484 00:07:06.001 00:52:55 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.001 00:52:55 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.001 00:52:55 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1020484' 00:07:06.001 killing process with pid 1020484 00:07:06.001 00:52:55 accel -- common/autotest_common.sh@967 -- # kill 1020484 00:07:06.001 00:52:55 accel -- common/autotest_common.sh@972 -- # wait 1020484 00:07:06.261 00:52:55 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:06.261 00:52:55 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:06.261 00:52:55 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:06.261 00:52:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.261 00:52:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.261 00:52:55 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:06.261 00:52:55 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
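The expected_opcs loop above records, for each opcode the target reports, which module is expected to service it — all "software" here, since no accel driver configuration was passed in. Read back through the RPC client, the whole loop condenses to something like this sketch (the rpc.py path relative to the spdk checkout is assumed):

declare -A expected_opcs
while IFS='=' read -r opc module; do
    # e.g. opc=crc32c, module=software
    expected_opcs["$opc"]=$module
done < <(./scripts/rpc.py accel_get_opc_assignments \
         | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')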
00:07:06.261 00:52:55 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.261 00:52:55 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:06.261 00:52:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.261 00:52:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:06.261 00:52:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:06.261 00:52:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.261 00:52:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.261 ************************************ 00:07:06.261 START TEST accel_missing_filename 00:07:06.261 ************************************ 00:07:06.261 00:52:55 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:06.261 00:52:55 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:06.261 00:52:55 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:06.261 00:52:55 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:06.261 00:52:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.261 00:52:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:06.261 00:52:55 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.261 00:52:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:06.261 00:52:55 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:06.261 [2024-07-14 00:52:55.675009] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:06.261 [2024-07-14 00:52:55.675071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020654 ] 00:07:06.521 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.521 [2024-07-14 00:52:55.737709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.521 [2024-07-14 00:52:55.831061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.521 [2024-07-14 00:52:55.889986] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.781 [2024-07-14 00:52:55.969373] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:06.781 A filename is required. 
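accel_missing_filename is the first of several negative tests in this block: the compress workload needs an input file via -l, so a bare "accel_perf -t 1 -w compress" has to fail ("A filename is required."), and the NOT wrapper turns that failure into a pass. A stripped-down sketch of the pattern — the real helper in autotest_common.sh also validates the executable and remaps exit codes above 128, which is the es=234/es=106 bookkeeping visible below:

NOT() {
    local es=0
    "$@" || es=$?
    # succeed only when the wrapped command failed
    (( es != 0 ))
}
NOT ./build/examples/accel_perf -t 1 -w compress && echo "failed as expected"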
00:07:06.781 00:52:56 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:06.781 00:52:56 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.781 00:52:56 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:06.781 00:52:56 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:06.781 00:52:56 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:06.781 00:52:56 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.781 00:07:06.781 real 0m0.395s 00:07:06.781 user 0m0.289s 00:07:06.781 sys 0m0.140s 00:07:06.781 00:52:56 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.781 00:52:56 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:06.781 ************************************ 00:07:06.781 END TEST accel_missing_filename 00:07:06.781 ************************************ 00:07:06.781 00:52:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.781 00:52:56 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.781 00:52:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:06.781 00:52:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.781 00:52:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.781 ************************************ 00:07:06.781 START TEST accel_compress_verify 00:07:06.781 ************************************ 00:07:06.781 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.781 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:06.781 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.781 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:06.781 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.781 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:06.781 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.781 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.781 00:52:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:06.781 00:52:56 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:06.781 00:52:56 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.781 00:52:56 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.781 00:52:56 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.781 00:52:56 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.781 00:52:56 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.781 00:52:56 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:06.781 00:52:56 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:06.781 [2024-07-14 00:52:56.116468] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:06.781 [2024-07-14 00:52:56.116534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020790 ] 00:07:06.781 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.781 [2024-07-14 00:52:56.178830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.042 [2024-07-14 00:52:56.272259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.042 [2024-07-14 00:52:56.329669] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.042 [2024-07-14 00:52:56.410288] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:07.304 00:07:07.304 Compression does not support the verify option, aborting. 00:07:07.304 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:07.304 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.304 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:07.304 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.304 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:07.304 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.304 00:07:07.304 real 0m0.393s 00:07:07.304 user 0m0.288s 00:07:07.304 sys 0m0.139s 00:07:07.304 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.304 00:52:56 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:07.304 ************************************ 00:07:07.304 END TEST accel_compress_verify 00:07:07.304 ************************************ 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.304 00:52:56 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.304 ************************************ 00:07:07.304 START TEST accel_wrong_workload 00:07:07.304 ************************************ 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.304 00:52:56 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:07.304 00:52:56 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:07.304 Unsupported workload type: foobar 00:07:07.304 [2024-07-14 00:52:56.554563] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:07.304 accel_perf options: 00:07:07.304 [-h help message] 00:07:07.304 [-q queue depth per core] 00:07:07.304 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:07.304 [-T number of threads per core 00:07:07.304 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:07.304 [-t time in seconds] 00:07:07.304 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:07.304 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:07.304 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:07.304 [-l for compress/decompress workloads, name of uncompressed input file 00:07:07.304 [-S for crc32c workload, use this seed value (default 0) 00:07:07.304 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:07.304 [-f for fill workload, use this BYTE value (default 255) 00:07:07.304 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:07.304 [-y verify result if this switch is on] 00:07:07.304 [-a tasks to allocate per core (default: same value as -q)] 00:07:07.304 Can be used to spread operations across a wider range of memory. 
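The usage text above is accel_perf rejecting the made-up "foobar" workload; the same summary is printed again for the negative-buffer case that follows. For contrast, the crc32c tests further down in this log drive the tool with workload names from the supported list, for example:

./build/examples/accel_perf -t 1 -w crc32c -S 32 -y    # crc32c, seed 32, verify the result
./build/examples/accel_perf -t 1 -w crc32c -y -C 2     # crc32c over a 2-element io vector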
00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.304 00:07:07.304 real 0m0.022s 00:07:07.304 user 0m0.011s 00:07:07.304 sys 0m0.011s 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.304 00:52:56 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:07.304 ************************************ 00:07:07.304 END TEST accel_wrong_workload 00:07:07.304 ************************************ 00:07:07.304 Error: writing output failed: Broken pipe 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.304 00:52:56 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.304 ************************************ 00:07:07.304 START TEST accel_negative_buffers 00:07:07.304 ************************************ 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:07.304 00:52:56 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:07.304 -x option must be non-negative. 
00:07:07.304 [2024-07-14 00:52:56.624371] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:07.304 accel_perf options: 00:07:07.304 [-h help message] 00:07:07.304 [-q queue depth per core] 00:07:07.304 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:07.304 [-T number of threads per core 00:07:07.304 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:07.304 [-t time in seconds] 00:07:07.304 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:07.304 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:07.304 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:07.304 [-l for compress/decompress workloads, name of uncompressed input file 00:07:07.304 [-S for crc32c workload, use this seed value (default 0) 00:07:07.304 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:07.304 [-f for fill workload, use this BYTE value (default 255) 00:07:07.304 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:07.304 [-y verify result if this switch is on] 00:07:07.304 [-a tasks to allocate per core (default: same value as -q)] 00:07:07.304 Can be used to spread operations across a wider range of memory. 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.304 00:07:07.304 real 0m0.023s 00:07:07.304 user 0m0.011s 00:07:07.304 sys 0m0.012s 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.304 00:52:56 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:07.304 ************************************ 00:07:07.304 END TEST accel_negative_buffers 00:07:07.304 ************************************ 00:07:07.304 Error: writing output failed: Broken pipe 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.304 00:52:56 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.304 00:52:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.304 ************************************ 00:07:07.304 START TEST accel_crc32c 00:07:07.304 ************************************ 00:07:07.304 00:52:56 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:07.304 00:52:56 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:07.304 00:52:56 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:07.304 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.304 00:52:56 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:07.304 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:07.305 00:52:56 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:07.305 [2024-07-14 00:52:56.685611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:07.305 [2024-07-14 00:52:56.685677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020865 ] 00:07:07.305 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.564 [2024-07-14 00:52:56.748093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.564 [2024-07-14 00:52:56.841929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.564 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 00:52:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:08.968 00:52:58 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.968 00:07:08.968 real 0m1.412s 00:07:08.968 user 0m1.269s 00:07:08.968 sys 0m0.146s 00:07:08.968 00:52:58 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.968 00:52:58 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:08.968 ************************************ 00:07:08.968 END TEST accel_crc32c 00:07:08.968 ************************************ 00:07:08.968 00:52:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.968 00:52:58 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:08.968 00:52:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:08.968 00:52:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.968 00:52:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.968 ************************************ 00:07:08.968 START TEST accel_crc32c_C2 00:07:08.968 ************************************ 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:08.968 [2024-07-14 00:52:58.148363] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:08.968 [2024-07-14 00:52:58.148431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021051 ] 00:07:08.968 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.968 [2024-07-14 00:52:58.212539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.968 [2024-07-14 00:52:58.306013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.968 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:08.969 00:52:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.380 00:07:10.380 real 0m1.416s 00:07:10.380 user 0m1.268s 00:07:10.380 sys 0m0.150s 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.380 00:52:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:10.380 ************************************ 00:07:10.380 END TEST accel_crc32c_C2 00:07:10.380 ************************************ 00:07:10.380 00:52:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.380 00:52:59 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:10.380 00:52:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.380 00:52:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.380 00:52:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.380 ************************************ 00:07:10.380 START TEST accel_copy 00:07:10.380 ************************************ 00:07:10.380 00:52:59 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
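For reference, the accel_copy trace below reduces to a single invocation of the accel_perf example binary with the workload flags shown in the run_test line above. Reproducing just that workload by hand would look roughly like the following; the harness also feeds a JSON config over /dev/fd/62, and dropping that -c argument (on the assumption accel_perf falls back to the software module without it) is an assumption, not something this log demonstrates:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w copy -y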
00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:10.380 00:52:59 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:10.380 [2024-07-14 00:52:59.608798] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:10.380 [2024-07-14 00:52:59.608878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021290 ] 00:07:10.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.380 [2024-07-14 00:52:59.671666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.380 [2024-07-14 00:52:59.766956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.638 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.639 00:52:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.578 
00:53:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:11.578 00:53:00 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.578 00:07:11.578 real 0m1.390s 00:07:11.578 user 0m1.245s 00:07:11.578 sys 0m0.146s 00:07:11.578 00:53:00 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.578 00:53:00 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:11.578 ************************************ 00:07:11.578 END TEST accel_copy 00:07:11.578 ************************************ 00:07:11.838 00:53:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.838 00:53:01 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:11.838 00:53:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:11.838 00:53:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.838 00:53:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.838 ************************************ 00:07:11.838 START TEST accel_fill 00:07:11.838 ************************************ 00:07:11.838 00:53:01 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:11.838 00:53:01 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:11.838 [2024-07-14 00:53:01.041840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:11.838 [2024-07-14 00:53:01.041932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021452 ] 00:07:11.838 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.838 [2024-07-14 00:53:01.101966] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.838 [2024-07-14 00:53:01.195198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
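The accel/accel.sh@19-23 entries that dominate these traces appear to be the harness reading the workload's parameter dump line by line and capturing the opcode and module that actually ran (the accel_opc= and accel_module= assignments at @22/@23). A rough sketch of that loop, not the real script, with the case patterns and the run_the_workload placeholder assumed:

    # sketch of the @19-23 loop seen in the trace; patterns and command are placeholders
    while IFS=: read -r var val; do
        case "$var" in
            *opc*)    accel_opc=$val ;;
            *module*) accel_module=$val ;;
        esac
    done < <(run_the_workload)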
00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.096 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.097 00:53:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.032 00:53:02 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:13.032 00:53:02 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.032 00:07:13.032 real 0m1.407s 00:07:13.032 user 0m1.269s 00:07:13.032 sys 0m0.141s 00:07:13.032 00:53:02 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.032 00:53:02 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:13.032 ************************************ 00:07:13.032 END TEST accel_fill 00:07:13.032 ************************************ 00:07:13.290 00:53:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.290 00:53:02 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:13.290 00:53:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:13.290 00:53:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.290 00:53:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.290 ************************************ 00:07:13.290 START TEST accel_copy_crc32c 00:07:13.290 ************************************ 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:13.290 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:13.290 [2024-07-14 00:53:02.505512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:13.290 [2024-07-14 00:53:02.505581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021605 ] 00:07:13.290 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.291 [2024-07-14 00:53:02.568537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.291 [2024-07-14 00:53:02.660590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.551 
00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.551 00:53:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.485 00:07:14.485 real 0m1.405s 00:07:14.485 user 0m1.260s 00:07:14.485 sys 0m0.148s 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.485 00:53:03 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:14.485 ************************************ 00:07:14.485 END TEST accel_copy_crc32c 00:07:14.485 ************************************ 00:07:14.743 00:53:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.743 00:53:03 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:14.743 00:53:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:14.743 00:53:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.743 00:53:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.743 ************************************ 00:07:14.743 START TEST accel_copy_crc32c_C2 00:07:14.743 ************************************ 00:07:14.743 00:53:03 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:14.743 00:53:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:14.743 [2024-07-14 00:53:03.955112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:14.743 [2024-07-14 00:53:03.955185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021877 ] 00:07:14.743 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.743 [2024-07-14 00:53:04.019888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.743 [2024-07-14 00:53:04.113020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.002 00:53:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.938 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
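Compared with the plain copy_crc32c run earlier (which set '4096 bytes' for both buffers), the -C 2 variant above sets '4096 bytes' and '8192 bytes': 2 x 4096 = 8192, which is consistent with the C2 suffix driving the operation over two 4 KiB blocks. The log never states what -C controls, so that reading is an inference from the buffer sizes rather than documented behaviour.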
00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.197 00:07:16.197 real 0m1.414s 00:07:16.197 user 0m1.269s 00:07:16.197 sys 0m0.148s 00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.197 00:53:05 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:16.197 ************************************ 00:07:16.197 END TEST accel_copy_crc32c_C2 00:07:16.197 ************************************ 00:07:16.197 00:53:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.197 00:53:05 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:16.197 00:53:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:16.197 00:53:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.197 00:53:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.197 ************************************ 00:07:16.197 START TEST accel_dualcast 00:07:16.198 ************************************ 00:07:16.198 00:53:05 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:16.198 00:53:05 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:16.198 [2024-07-14 00:53:05.418199] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
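Each case in this group is launched through the same two layers visible in the trace: run_test prints the START/END banners and the real/user/sys timing, and accel_test forwards its arguments to accel_perf. For the dualcast case that chain is, roughly (wrapper internals are not shown in this log and are assumed; both command lines below are copied verbatim from the trace):

    run_test accel_dualcast accel_test -t 1 -w dualcast -y
    # which the accel.sh helpers expand to the call seen at accel.sh@12:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y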
00:07:16.198 [2024-07-14 00:53:05.418262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022036 ] 00:07:16.198 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.198 [2024-07-14 00:53:05.484029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.198 [2024-07-14 00:53:05.578607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.458 00:53:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.395 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.395 00:53:06 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:17.655 00:53:06 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.655 00:07:17.655 real 0m1.411s 00:07:17.655 user 0m1.256s 00:07:17.655 sys 0m0.155s 00:07:17.655 00:53:06 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.655 00:53:06 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:17.655 ************************************ 00:07:17.655 END TEST accel_dualcast 00:07:17.655 ************************************ 00:07:17.655 00:53:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.655 00:53:06 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:17.655 00:53:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:17.655 00:53:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.655 00:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.655 ************************************ 00:07:17.655 START TEST accel_compare 00:07:17.655 ************************************ 00:07:17.655 00:53:06 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:17.655 00:53:06 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:17.655 [2024-07-14 00:53:06.873977] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
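The three accel.sh@27 checks that close each passing case (most recently the dualcast run above) are the actual pass criterion: the harness asserts that a module and an opcode were reported at all, and that the module is the expected software one. In isolation they amount to the following, where the variable names match the accel_module=/accel_opc= assignments seen at accel.sh@22-23:

    [[ -n $accel_module ]]
    [[ -n $accel_opc ]]
    [[ $accel_module == software ]]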
00:07:17.655 [2024-07-14 00:53:06.874040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022197 ] 00:07:17.655 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.655 [2024-07-14 00:53:06.934560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.655 [2024-07-14 00:53:07.027485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.914 00:53:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.852 00:53:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.852 00:53:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.852 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.853 
00:53:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:18.853 00:53:08 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.113 00:53:08 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.113 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.113 00:53:08 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.113 00:53:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.113 00:53:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:19.113 00:53:08 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.113 00:07:19.113 real 0m1.410s 00:07:19.113 user 0m1.272s 00:07:19.113 sys 0m0.140s 00:07:19.113 00:53:08 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.113 00:53:08 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:19.113 ************************************ 00:07:19.113 END TEST accel_compare 00:07:19.113 ************************************ 00:07:19.113 00:53:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.113 00:53:08 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:19.113 00:53:08 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.113 00:53:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.113 00:53:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.113 ************************************ 00:07:19.113 START TEST accel_xor 00:07:19.113 ************************************ 00:07:19.113 00:53:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:19.113 00:53:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:19.113 [2024-07-14 00:53:08.328062] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
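The long runs of 'IFS=:', 'read -r var val' and 'case "$var" in' lines in each case are accel.sh consuming the summary that accel_perf prints, one 'key: value' line at a time, keeping the opcode and module so the closing [[ -n software ]] / [[ -n xor ]] checks can assert on them. A simplified sketch of that loop follows; the function name, the case patterns and the sample summary lines are illustrative stand-ins, not the literal accel.sh source:

    parse_summary() {
        local accel_opc='' accel_module=''
        while IFS=: read -r var val; do
            case "$var" in
                *Workload*) accel_opc=${val// /} ;;    # e.g. xor, compare, dif_verify
                *Module*)   accel_module=${val// /} ;; # e.g. software
            esac
        done
        # same shape as the final assertions seen in the trace
        [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]
    }
    printf '%s\n' 'Workload Type: xor' 'Module: software' 'Queue depth: 32' \
        'Run time: 1 seconds' 'Verify: Yes' | parse_summary && echo 'summary parsed'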
00:07:19.113 [2024-07-14 00:53:08.328122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022350 ] 00:07:19.113 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.113 [2024-07-14 00:53:08.389014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.113 [2024-07-14 00:53:08.481555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.371 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.372 00:53:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:20.310 00:53:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.310 00:07:20.310 real 0m1.405s 00:07:20.310 user 0m1.265s 00:07:20.310 sys 0m0.142s 00:07:20.310 00:53:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.310 00:53:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:20.310 ************************************ 00:07:20.310 END TEST accel_xor 00:07:20.310 ************************************ 00:07:20.569 00:53:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.569 00:53:09 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:20.569 00:53:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:20.569 00:53:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.569 00:53:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.569 ************************************ 00:07:20.569 START TEST accel_xor 00:07:20.569 ************************************ 00:07:20.569 00:53:09 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:20.569 00:53:09 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:20.569 [2024-07-14 00:53:09.779874] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
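This second xor case differs from the one above only in the source count: run_test passes -x 3, and the value traced as 2 in the previous run is traced as 3 here. Under the same assumptions as the compare sketch earlier (workspace build present, JSON config omitted), it can be reproduced as:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # xor across three source buffers instead of the two used by the previous case
    ./build/examples/accel_perf -t 1 -w xor -y -x 3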
00:07:20.569 [2024-07-14 00:53:09.779981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022622 ] 00:07:20.569 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.569 [2024-07-14 00:53:09.842138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.569 [2024-07-14 00:53:09.943245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.829 00:53:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:21.765 00:53:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.765 00:07:21.765 real 0m1.409s 00:07:21.765 user 0m1.262s 00:07:21.765 sys 0m0.149s 00:07:21.765 00:53:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.765 00:53:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:21.765 ************************************ 00:07:21.765 END TEST accel_xor 00:07:21.765 ************************************ 00:07:22.024 00:53:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.024 00:53:11 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:22.024 00:53:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:22.024 00:53:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.024 00:53:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.024 ************************************ 00:07:22.024 START TEST accel_dif_verify 00:07:22.024 ************************************ 00:07:22.024 00:53:11 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:22.024 00:53:11 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:22.024 [2024-07-14 00:53:11.236364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
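The dif_verify case that starts here adds DIF parameters on top of the usual 4096-byte buffers: the traced values of '512 bytes' and '8 bytes' are extra sizes the test feeds in, presumably the data chunk and DIF field sizes (an interpretation; the trace itself does not label them). The bare invocation, with the same caveats as the earlier sketches, is:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # software DIF verification pass; dif_generate and dif_generate_copy below follow the same pattern
    ./build/examples/accel_perf -t 1 -w dif_verify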
00:07:22.024 [2024-07-14 00:53:11.236433] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022783 ] 00:07:22.024 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.024 [2024-07-14 00:53:11.297575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.024 [2024-07-14 00:53:11.390877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.284 00:53:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:23.221 00:53:12 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.221 00:07:23.221 real 0m1.410s 00:07:23.221 user 0m1.271s 00:07:23.221 sys 0m0.143s 00:07:23.221 00:53:12 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.221 00:53:12 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:23.221 ************************************ 00:07:23.221 END TEST accel_dif_verify 00:07:23.221 ************************************ 00:07:23.481 00:53:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.481 00:53:12 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:23.481 00:53:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:23.481 00:53:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.481 00:53:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.481 ************************************ 00:07:23.481 START TEST accel_dif_generate 00:07:23.481 ************************************ 00:07:23.481 00:53:12 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.481 
00:53:12 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:23.481 00:53:12 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:23.481 [2024-07-14 00:53:12.692092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:23.481 [2024-07-14 00:53:12.692157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022942 ] 00:07:23.481 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.481 [2024-07-14 00:53:12.755065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.481 [2024-07-14 00:53:12.847961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:23.740 00:53:12 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.740 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.741 00:53:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.677 00:53:14 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:24.677 00:53:14 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.677 00:07:24.677 real 0m1.407s 00:07:24.677 user 0m1.266s 00:07:24.677 sys 0m0.145s 00:07:24.677 00:53:14 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.677 00:53:14 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:24.677 ************************************ 00:07:24.677 END TEST accel_dif_generate 00:07:24.677 ************************************ 00:07:24.936 00:53:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.936 00:53:14 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:24.936 00:53:14 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:24.936 00:53:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.936 00:53:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.936 ************************************ 00:07:24.936 START TEST accel_dif_generate_copy 00:07:24.936 ************************************ 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:24.936 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:24.936 [2024-07-14 00:53:14.146189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
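Every case in this block is launched through the run_test wrapper visible in the trace (here 'run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy'), which emits the START TEST / END TEST banners and the real/user/sys lines, presumably via bash's time builtin. A rough sketch of that pattern, with a hypothetical wrapper name since the actual common/autotest_common.sh source is not reproduced in this log:

    run_case() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                    # e.g. accel_test -t 1 -w dif_generate_copy
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }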
00:07:24.936 [2024-07-14 00:53:14.146254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023179 ] 00:07:24.936 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.936 [2024-07-14 00:53:14.208227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.936 [2024-07-14 00:53:14.301363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.194 00:53:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.132 00:07:26.132 real 0m1.409s 00:07:26.132 user 0m1.261s 00:07:26.132 sys 0m0.150s 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.132 00:53:15 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:26.132 ************************************ 00:07:26.132 END TEST accel_dif_generate_copy 00:07:26.132 ************************************ 00:07:26.391 00:53:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.391 00:53:15 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:26.391 00:53:15 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.391 00:53:15 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:26.391 00:53:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.391 00:53:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.391 ************************************ 00:07:26.391 START TEST accel_comp 00:07:26.391 ************************************ 00:07:26.391 00:53:15 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.391 00:53:15 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:26.392 00:53:15 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:26.392 [2024-07-14 00:53:15.601735] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:26.392 [2024-07-14 00:53:15.601802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023361 ] 00:07:26.392 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.392 [2024-07-14 00:53:15.663342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.392 [2024-07-14 00:53:15.758782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.671 00:53:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:27.670 00:53:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.670 00:07:27.670 real 0m1.396s 00:07:27.670 user 0m1.266s 00:07:27.670 sys 0m0.133s 00:07:27.670 00:53:16 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.670 00:53:16 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:27.670 ************************************ 00:07:27.670 END TEST accel_comp 00:07:27.670 ************************************ 00:07:27.670 00:53:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.670 00:53:17 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.670 00:53:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:27.670 00:53:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.670 00:53:17 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.670 ************************************ 00:07:27.670 START TEST accel_decomp 00:07:27.670 ************************************ 00:07:27.670 00:53:17 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:27.670 00:53:17 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:27.670 [2024-07-14 00:53:17.041294] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
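The compression cases differ from the fixed-buffer workloads only in their input handling: the compress run above points accel_perf at the checked-in test file with -l .../spdk/test/accel/bib, and the decompress run starting here additionally passes -y, which these tests use to have the decompressed output verified. A sketch of the two invocations with the harness-managed -c /dev/fd/62 config omitted (paths assume this job's workspace layout; everything else follows the flags visible in the recorded command lines):

  ./spdk/build/examples/accel_perf -t 1 -w compress   -l ./spdk/test/accel/bib
  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y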
00:07:27.670 [2024-07-14 00:53:17.041356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023532 ] 00:07:27.670 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.931 [2024-07-14 00:53:17.104496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.931 [2024-07-14 00:53:17.197857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.931 00:53:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.311 00:53:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.311 00:07:29.311 real 0m1.410s 00:07:29.311 user 0m1.265s 00:07:29.311 sys 0m0.149s 00:07:29.311 00:53:18 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.311 00:53:18 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:29.311 ************************************ 00:07:29.311 END TEST accel_decomp 00:07:29.311 ************************************ 00:07:29.311 00:53:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.311 00:53:18 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:29.311 00:53:18 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:29.311 00:53:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.311 00:53:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.311 ************************************ 00:07:29.311 START TEST accel_decomp_full 00:07:29.311 ************************************ 00:07:29.311 00:53:18 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:29.311 00:53:18 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:29.311 [2024-07-14 00:53:18.498092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:29.311 [2024-07-14 00:53:18.498157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023683 ] 00:07:29.311 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.311 [2024-07-14 00:53:18.557957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.311 [2024-07-14 00:53:18.651212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.311 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.312 00:53:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:30.695 00:53:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.695 00:07:30.695 real 0m1.415s 00:07:30.695 user 0m1.273s 00:07:30.695 sys 0m0.145s 00:07:30.695 00:53:19 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.695 00:53:19 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:30.695 ************************************ 00:07:30.695 END TEST accel_decomp_full 00:07:30.695 ************************************ 00:07:30.695 00:53:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.695 00:53:19 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.695 00:53:19 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:30.695 00:53:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.695 00:53:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.695 ************************************ 00:07:30.695 START TEST accel_decomp_mcore 00:07:30.695 ************************************ 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:30.695 00:53:19 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:30.695 [2024-07-14 00:53:19.956426] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
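The mcore variant runs the same decompress workload with -m 0xf, and the EAL parameters recorded just below switch from -c 0x1 to -c 0xf: four reactors are started instead of one. That is also why this case finishes with roughly four times as much user CPU time (about 4.7s user against about 1.4s real), whereas the single-core runs show user roughly equal to real. Sketch of the multicore invocation under the same assumptions as the earlier commands:

  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -m 0xf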
00:07:30.695 [2024-07-14 00:53:19.956480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023957 ] 00:07:30.695 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.695 [2024-07-14 00:53:20.018768] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.954 [2024-07-14 00:53:20.124764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.954 [2024-07-14 00:53:20.124818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.954 [2024-07-14 00:53:20.124934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.954 [2024-07-14 00:53:20.124937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.954 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:30.955 00:53:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.336 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.337 00:07:32.337 real 0m1.425s 00:07:32.337 user 0m4.731s 00:07:32.337 sys 0m0.151s 00:07:32.337 00:53:21 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.337 00:53:21 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:32.337 ************************************ 00:07:32.337 END TEST accel_decomp_mcore 00:07:32.337 ************************************ 00:07:32.337 00:53:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.337 00:53:21 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:32.337 00:53:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:32.337 00:53:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.337 00:53:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.337 ************************************ 00:07:32.337 START TEST accel_decomp_full_mcore 00:07:32.337 ************************************ 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:32.337 [2024-07-14 00:53:21.428353] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
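The full_mcore case combines the full-buffer (-o 0) and multicore (-m 0xf) variations seen in the two preceding cases; accordingly the xtrace below records a '111250 bytes' data size, presumably the whole bib input, where the fixed-size cases used '4096 bytes'. Sketch, same assumptions as above:

  ./spdk/build/examples/accel_perf -t 1 -w decompress -l ./spdk/test/accel/bib -y -o 0 -m 0xf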
00:07:32.337 [2024-07-14 00:53:21.428423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024113 ] 00:07:32.337 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.337 [2024-07-14 00:53:21.492447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.337 [2024-07-14 00:53:21.595149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.337 [2024-07-14 00:53:21.595206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.337 [2024-07-14 00:53:21.595259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.337 [2024-07-14 00:53:21.595262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.337 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.338 00:53:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.717 00:07:33.717 real 0m1.426s 00:07:33.717 user 0m4.729s 00:07:33.717 sys 0m0.158s 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.717 00:53:22 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:33.717 ************************************ 00:07:33.717 END TEST accel_decomp_full_mcore 00:07:33.717 ************************************ 00:07:33.717 00:53:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.717 00:53:22 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.717 00:53:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:33.717 00:53:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.717 00:53:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.717 ************************************ 00:07:33.717 START TEST accel_decomp_mthread 00:07:33.717 ************************************ 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:33.717 00:53:22 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:33.717 [2024-07-14 00:53:22.900975] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
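
The long runs of "IFS=:", "read -r var val" and case "$var" dispatches above come from the accel.sh test driver stepping through the colon-separated var:val settings (core mask, workload, data size, module, queue depth, run time) that each accel_test case walks through before and after launching accel_perf. A minimal sketch of that loop, reconstructed from the trace rather than copied from accel.sh — the case pattern names are guesses — would be:

    # parse the var:val pairs the harness emits for one test case
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;      # e.g. decompress (pattern name is a guess)
            module) accel_module=$val ;;   # e.g. software   (pattern name is a guess)
            *)      ;;                     # core mask 0xf, queue depth 32, '1 seconds', bib path, Yes
        esac
    done
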
00:07:33.717 [2024-07-14 00:53:22.901040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024283 ] 00:07:33.717 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.717 [2024-07-14 00:53:22.962467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.717 [2024-07-14 00:53:23.058877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.717 00:53:23 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:33.717 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.718 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.984 00:53:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.923 00:53:24 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.923 00:07:34.923 real 0m1.424s 00:07:34.923 user 0m1.275s 00:07:34.923 sys 0m0.152s 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.923 00:53:24 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:34.923 ************************************ 00:07:34.923 END TEST accel_decomp_mthread 00:07:34.923 ************************************ 00:07:34.923 00:53:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.923 00:53:24 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.923 00:53:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:34.923 00:53:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.923 00:53:24 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.182 ************************************ 00:07:35.182 START TEST accel_decomp_full_mthread 00:07:35.182 ************************************ 00:07:35.182 00:53:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.182 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:35.183 [2024-07-14 00:53:24.372210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
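
For reference, the accel_perf command line that the trace captures for these *_mthread cases can be written out flag by flag; the comments are inferred from the var:val pairs above, not from accel_perf's help text, so treat them as assumptions:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree path as it appears in this log
    args=(
        -c /dev/fd/62              # JSON accel config the harness hands over on fd 62
        -t 1                       # matches the '1 seconds' run time in the trace
        -w decompress              # workload under test
        -l "$spdk/test/accel/bib"  # compressed input used by the decompress cases
        -y                         # result verification (inferred)
        -o 0                       # the "full" variant ('111250 bytes' rather than '4096 bytes')
        -T 2                       # two worker threads, i.e. the *_mthread flavour
    )
    "$spdk/build/examples/accel_perf" "${args[@]}"
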
00:07:35.183 [2024-07-14 00:53:24.372274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024476 ] 00:07:35.183 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.183 [2024-07-14 00:53:24.428718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.183 [2024-07-14 00:53:24.522664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.183 00:53:24 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.183 00:53:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:36.560 00:53:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.560 00:07:36.560 real 0m1.448s 00:07:36.560 user 0m1.309s 00:07:36.560 sys 0m0.142s 00:07:36.561 00:53:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.561 00:53:25 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:36.561 ************************************ 00:07:36.561 END 
TEST accel_decomp_full_mthread 00:07:36.561 ************************************ 00:07:36.561 00:53:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.561 00:53:25 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:36.561 00:53:25 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:36.561 00:53:25 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:36.561 00:53:25 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:36.561 00:53:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.561 00:53:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.561 00:53:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.561 00:53:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.561 00:53:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.561 00:53:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.561 00:53:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.561 00:53:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:36.561 00:53:25 accel -- accel/accel.sh@41 -- # jq -r . 00:07:36.561 ************************************ 00:07:36.561 START TEST accel_dif_functional_tests 00:07:36.561 ************************************ 00:07:36.561 00:53:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:36.561 [2024-07-14 00:53:25.886434] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:36.561 [2024-07-14 00:53:25.886493] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024709 ] 00:07:36.561 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.561 [2024-07-14 00:53:25.946176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.820 [2024-07-14 00:53:26.044000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.820 [2024-07-14 00:53:26.044051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.820 [2024-07-14 00:53:26.044055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.820 00:07:36.820 00:07:36.820 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.820 http://cunit.sourceforge.net/ 00:07:36.820 00:07:36.820 00:07:36.820 Suite: accel_dif 00:07:36.820 Test: verify: DIF generated, GUARD check ...passed 00:07:36.820 Test: verify: DIF generated, APPTAG check ...passed 00:07:36.820 Test: verify: DIF generated, REFTAG check ...passed 00:07:36.820 Test: verify: DIF not generated, GUARD check ...[2024-07-14 00:53:26.134304] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.820 passed 00:07:36.820 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 00:53:26.134367] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.820 passed 00:07:36.820 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 00:53:26.134399] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.820 passed 00:07:36.820 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:36.820 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 
00:53:26.134466] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:36.820 passed 00:07:36.820 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:36.820 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:36.820 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:36.820 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 00:53:26.134591] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:36.820 passed 00:07:36.820 Test: verify copy: DIF generated, GUARD check ...passed 00:07:36.820 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:36.820 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:36.820 Test: verify copy: DIF not generated, GUARD check ...[2024-07-14 00:53:26.134735] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.820 passed 00:07:36.820 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-14 00:53:26.134768] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.820 passed 00:07:36.820 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-14 00:53:26.134798] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.820 passed 00:07:36.820 Test: generate copy: DIF generated, GUARD check ...passed 00:07:36.820 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:36.820 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:36.820 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:36.820 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:36.820 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:36.820 Test: generate copy: iovecs-len validate ...[2024-07-14 00:53:26.135032] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:36.820 passed 00:07:36.820 Test: generate copy: buffer alignment validate ...passed 00:07:36.820 00:07:36.820 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.820 suites 1 1 n/a 0 0 00:07:36.820 tests 26 26 26 0 0 00:07:36.820 asserts 115 115 115 0 n/a 00:07:36.820 00:07:36.820 Elapsed time = 0.002 seconds 00:07:37.078 00:07:37.078 real 0m0.484s 00:07:37.078 user 0m0.723s 00:07:37.078 sys 0m0.182s 00:07:37.078 00:53:26 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.078 00:53:26 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:37.078 ************************************ 00:07:37.078 END TEST accel_dif_functional_tests 00:07:37.078 ************************************ 00:07:37.078 00:53:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.078 00:07:37.078 real 0m31.761s 00:07:37.078 user 0m35.112s 00:07:37.078 sys 0m4.599s 00:07:37.078 00:53:26 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.078 00:53:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.078 ************************************ 00:07:37.078 END TEST accel 00:07:37.078 ************************************ 00:07:37.078 00:53:26 -- common/autotest_common.sh@1142 -- # return 0 00:07:37.078 00:53:26 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:37.078 00:53:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.078 00:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.078 00:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:37.078 ************************************ 00:07:37.078 START TEST accel_rpc 00:07:37.078 ************************************ 00:07:37.078 00:53:26 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:37.078 * Looking for test storage... 00:07:37.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:37.078 00:53:26 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:37.078 00:53:26 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1024780 00:07:37.078 00:53:26 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:37.078 00:53:26 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1024780 00:07:37.078 00:53:26 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1024780 ']' 00:07:37.078 00:53:26 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.078 00:53:26 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:37.079 00:53:26 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.079 00:53:26 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:37.079 00:53:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.337 [2024-07-14 00:53:26.511368] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
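
The "-c /dev/fd/62" that every binary above (accel_perf and now test/accel/dif/dif) receives is the JSON accel configuration assembled by the build_accel_config lines in the trace: fragments are collected into accel_json_cfg, joined with a comma IFS and run through jq -r. A rough hand-written equivalent — the JSON layout and the example fragment are illustrative, and the descriptor is presumably supplied via process substitution — is:

    accel_json_cfg=()
    # a hardware-module test would append an RPC fragment here, e.g.:
    #   accel_json_cfg+=('{"method": "dpdk_cryptodev_scan_accel_module"}')
    build_accel_config() {
        local IFS=,
        echo "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}" | jq -r .
    }
    # hand the generated JSON to the test binary on a descriptor, as the harness does
    ./test/accel/dif/dif -c <(build_accel_config)
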
00:07:37.337 [2024-07-14 00:53:26.511460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024780 ] 00:07:37.337 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.337 [2024-07-14 00:53:26.568661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.337 [2024-07-14 00:53:26.656875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.337 00:53:26 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.337 00:53:26 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:37.337 00:53:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:37.337 00:53:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:37.337 00:53:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:37.337 00:53:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:37.337 00:53:26 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:37.337 00:53:26 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.337 00:53:26 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.337 00:53:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.597 ************************************ 00:07:37.597 START TEST accel_assign_opcode 00:07:37.597 ************************************ 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.597 [2024-07-14 00:53:26.757572] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.597 [2024-07-14 00:53:26.765578] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.597 00:53:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.855 software 00:07:37.855 00:07:37.855 real 0m0.298s 00:07:37.855 user 0m0.041s 00:07:37.855 sys 0m0.006s 00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.855 00:53:27 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.856 ************************************ 00:07:37.856 END TEST accel_assign_opcode 00:07:37.856 ************************************ 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:37.856 00:53:27 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1024780 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1024780 ']' 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1024780 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1024780 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1024780' 00:07:37.856 killing process with pid 1024780 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@967 -- # kill 1024780 00:07:37.856 00:53:27 accel_rpc -- common/autotest_common.sh@972 -- # wait 1024780 00:07:38.113 00:07:38.113 real 0m1.092s 00:07:38.113 user 0m1.035s 00:07:38.113 sys 0m0.429s 00:07:38.113 00:53:27 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.113 00:53:27 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.113 ************************************ 00:07:38.113 END TEST accel_rpc 00:07:38.113 ************************************ 00:07:38.113 00:53:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:38.371 00:53:27 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.371 00:53:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.371 00:53:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.371 00:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:38.371 ************************************ 00:07:38.371 START TEST app_cmdline 00:07:38.371 ************************************ 00:07:38.371 00:53:27 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.371 * Looking for test storage... 
00:07:38.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:38.371 00:53:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:38.371 00:53:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1024984 00:07:38.371 00:53:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:38.371 00:53:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1024984 00:07:38.371 00:53:27 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1024984 ']' 00:07:38.371 00:53:27 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.372 00:53:27 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.372 00:53:27 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.372 00:53:27 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.372 00:53:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.372 [2024-07-14 00:53:27.665199] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:38.372 [2024-07-14 00:53:27.665292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024984 ] 00:07:38.372 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.372 [2024-07-14 00:53:27.730693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.632 [2024-07-14 00:53:27.817707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.890 00:53:28 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.890 00:53:28 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:38.890 00:53:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:39.148 { 00:07:39.148 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:39.148 "fields": { 00:07:39.148 "major": 24, 00:07:39.148 "minor": 9, 00:07:39.148 "patch": 0, 00:07:39.148 "suffix": "-pre", 00:07:39.148 "commit": "719d03c6a" 00:07:39.148 } 00:07:39.148 } 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:39.148 00:53:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.148 00:53:28 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.406 request: 00:07:39.406 { 00:07:39.406 "method": "env_dpdk_get_mem_stats", 00:07:39.406 "req_id": 1 00:07:39.406 } 00:07:39.406 Got JSON-RPC error response 00:07:39.406 response: 00:07:39.406 { 00:07:39.406 "code": -32601, 00:07:39.406 "message": "Method not found" 00:07:39.406 } 00:07:39.406 00:53:28 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:39.406 00:53:28 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:39.406 00:53:28 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:39.407 00:53:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1024984 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1024984 ']' 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1024984 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1024984 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1024984' 00:07:39.407 killing process with pid 1024984 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@967 -- # kill 1024984 00:07:39.407 00:53:28 app_cmdline -- common/autotest_common.sh@972 -- # wait 1024984 00:07:39.971 00:07:39.971 real 0m1.537s 00:07:39.971 user 0m1.902s 00:07:39.971 sys 0m0.474s 00:07:39.971 00:53:29 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
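
Condensed, the cmdline.sh run above starts spdk_tgt with an RPC allow-list and checks that only the two permitted methods answer while anything else fails with JSON-RPC error -32601; a hand-runnable version of that flow (the sleep stands in for the harness's waitforlisten helper) is:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    sleep 1                                          # stand-in for waitforlisten
    "$spdk/scripts/rpc.py" spdk_get_version          # allowed: returns the version object shown above
    "$spdk/scripts/rpc.py" rpc_get_methods           # allowed: lists the permitted methods
    "$spdk/scripts/rpc.py" env_dpdk_get_mem_stats \
        || echo "rejected as expected: -32601 Method not found"
    kill "$tgt_pid"
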
00:07:39.971 00:53:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:39.971 ************************************ 00:07:39.971 END TEST app_cmdline 00:07:39.971 ************************************ 00:07:39.971 00:53:29 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.971 00:53:29 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:39.971 00:53:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.971 00:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.971 00:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:39.971 ************************************ 00:07:39.971 START TEST version 00:07:39.971 ************************************ 00:07:39.971 00:53:29 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:39.971 * Looking for test storage... 00:07:39.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:39.971 00:53:29 version -- app/version.sh@17 -- # get_header_version major 00:07:39.971 00:53:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.971 00:53:29 version -- app/version.sh@14 -- # cut -f2 00:07:39.971 00:53:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.971 00:53:29 version -- app/version.sh@17 -- # major=24 00:07:39.971 00:53:29 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.971 00:53:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.971 00:53:29 version -- app/version.sh@14 -- # cut -f2 00:07:39.971 00:53:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.971 00:53:29 version -- app/version.sh@18 -- # minor=9 00:07:39.971 00:53:29 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.971 00:53:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.971 00:53:29 version -- app/version.sh@14 -- # cut -f2 00:07:39.971 00:53:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.971 00:53:29 version -- app/version.sh@19 -- # patch=0 00:07:39.971 00:53:29 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.971 00:53:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:39.971 00:53:29 version -- app/version.sh@14 -- # cut -f2 00:07:39.971 00:53:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.971 00:53:29 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.971 00:53:29 version -- app/version.sh@22 -- # version=24.9 00:07:39.971 00:53:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.971 00:53:29 version -- app/version.sh@28 -- # version=24.9rc0 00:07:39.971 00:53:29 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:39.971 00:53:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:39.971 00:53:29 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:39.971 00:53:29 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:39.971 00:07:39.971 real 0m0.113s 00:07:39.971 user 0m0.071s 00:07:39.971 sys 0m0.064s 00:07:39.971 00:53:29 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.971 00:53:29 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.971 ************************************ 00:07:39.971 END TEST version 00:07:39.971 ************************************ 00:07:39.971 00:53:29 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.971 00:53:29 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:39.971 00:53:29 -- spdk/autotest.sh@198 -- # uname -s 00:07:39.971 00:53:29 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:39.971 00:53:29 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.971 00:53:29 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.971 00:53:29 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:39.971 00:53:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:39.971 00:53:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:39.971 00:53:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.971 00:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:39.971 00:53:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:39.971 00:53:29 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:39.971 00:53:29 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:39.971 00:53:29 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:39.971 00:53:29 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:39.971 00:53:29 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:39.971 00:53:29 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.971 00:53:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.971 00:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.971 00:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:39.971 ************************************ 00:07:39.971 START TEST nvmf_tcp 00:07:39.971 ************************************ 00:07:39.971 00:53:29 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:39.971 * Looking for test storage... 00:07:39.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.972 00:53:29 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.230 00:53:29 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.230 00:53:29 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.230 00:53:29 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.230 00:53:29 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.230 00:53:29 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.230 00:53:29 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.230 00:53:29 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:40.230 00:53:29 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:40.230 00:53:29 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.230 00:53:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:40.230 00:53:29 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:40.230 00:53:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.230 00:53:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.230 00:53:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.230 ************************************ 00:07:40.230 START TEST nvmf_example 00:07:40.230 ************************************ 00:07:40.230 00:53:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:40.230 * Looking for test storage... 
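Editor's note: the nvmf common setup traced above generates a host NQN with `nvme gen-hostnqn` and records a matching host ID; both are later handed to `nvme connect` via the NVME_HOST array. A short sketch of that identity setup — the derivation of the host ID as the UUID suffix of the NQN is an assumption made for illustration (it matches the captured values, but the script may obtain it differently):

    # Illustrative sketch: derive host identity the way the captured values suggest.
    HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}          # assumed: the UUID suffix doubles as the host ID
    # later used as: nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" ...
    echo "hostnqn=$HOSTNQN hostid=$HOSTID"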
00:07:40.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.231 00:53:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:42.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:42.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.157 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:42.158 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:42.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.158 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:42.426 00:07:42.426 --- 10.0.0.2 ping statistics --- 00:07:42.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.426 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:07:42.426 00:07:42.426 --- 10.0.0.1 ping statistics --- 00:07:42.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.426 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1027000 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1027000 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1027000 ']' 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
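Editor's note: the lines above show the TCP test topology being built: one NIC port is moved into a network namespace to act as the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens port 4420, and a ping in each direction verifies reachability before the target app starts. A condensed sketch of the same steps, with placeholder interface names tgt0/ini0 standing in for the cvl_0_0/cvl_0_1 devices seen in the log:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set tgt0 netns "$NS"                      # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev ini0                  # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev tgt0
    ip link set ini0 up
    ip netns exec "$NS" ip link set tgt0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target namespace -> root namespace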
00:07:42.426 00:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.427 00:53:31 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:42.427 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.359 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.359 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:43.359 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:43.359 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:43.360 00:53:32 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:43.360 EAL: No free 2048 kB hugepages reported on node 1 
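Editor's note: the rpc_cmd calls above stand the example target up step by step — create the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a TCP listener — and then spdk_nvme_perf drives 4 KiB random read/write at queue depth 64 for 10 seconds. The equivalent sequence outside the test harness, a sketch whose method names and arguments are taken directly from the log (add an explicit -s socket argument if the target is not on the default socket):

    RPC="./scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512                     # 64 MiB bdev, 512-byte blocks -> "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Drive it with the same workload as the log:
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'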
00:07:55.583 Initializing NVMe Controllers 00:07:55.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:55.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:55.584 Initialization complete. Launching workers. 00:07:55.584 ======================================================== 00:07:55.584 Latency(us) 00:07:55.584 Device Information : IOPS MiB/s Average min max 00:07:55.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14894.40 58.18 4296.59 877.38 15948.04 00:07:55.584 ======================================================== 00:07:55.584 Total : 14894.40 58.18 4296.59 877.38 15948.04 00:07:55.584 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.584 rmmod nvme_tcp 00:07:55.584 rmmod nvme_fabrics 00:07:55.584 rmmod nvme_keyring 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1027000 ']' 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1027000 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1027000 ']' 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1027000 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1027000 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1027000' 00:07:55.584 killing process with pid 1027000 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1027000 00:07:55.584 00:53:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1027000 00:07:55.584 nvmf threads initialize successfully 00:07:55.584 bdev subsystem init successfully 00:07:55.584 created a nvmf target service 00:07:55.584 create targets's poll groups done 00:07:55.584 all subsystems of target started 00:07:55.584 nvmf target is running 00:07:55.584 all subsystems of target stopped 00:07:55.584 destroy targets's poll groups done 00:07:55.584 destroyed the nvmf target service 00:07:55.584 bdev subsystem finish successfully 00:07:55.584 nvmf threads destroy successfully 00:07:55.584 00:53:43 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.584 00:53:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.584 00:53:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.584 00:53:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.584 00:53:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.584 00:53:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.584 00:53:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.584 00:53:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.843 00:53:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:55.843 00:53:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:55.843 00:53:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.843 00:53:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.843 00:07:55.843 real 0m15.841s 00:07:55.843 user 0m44.892s 00:07:55.843 sys 0m3.195s 00:07:56.104 00:53:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.104 00:53:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.104 ************************************ 00:07:56.104 END TEST nvmf_example 00:07:56.104 ************************************ 00:07:56.104 00:53:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:56.104 00:53:45 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:56.104 00:53:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:56.104 00:53:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.104 00:53:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.104 ************************************ 00:07:56.104 START TEST nvmf_filesystem 00:07:56.104 ************************************ 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:56.104 * Looking for test storage... 
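Editor's note: the perf summary printed earlier is internally consistent: at a 4096-byte I/O size, 14894.40 IOPS corresponds to about 58.18 MiB/s, which matches the throughput column, and the average latency of roughly 4297 µs matches queue depth divided by IOPS. A quick arithmetic check using the values from that table:

    # Sanity check of the perf summary above (numbers copied from the log).
    awk 'BEGIN {
        iops = 14894.40; iosize = 4096; qd = 64
        printf "throughput  = %.2f MiB/s\n", iops * iosize / 1048576   # ~58.18, matches MiB/s column
        printf "avg latency = %.2f us\n",    qd / iops * 1e6           # ~4296.6, matches Average column
    }'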
00:07:56.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:56.104 00:53:45 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:56.104 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:56.105 #define SPDK_CONFIG_H 00:07:56.105 #define SPDK_CONFIG_APPS 1 00:07:56.105 #define SPDK_CONFIG_ARCH native 00:07:56.105 #undef SPDK_CONFIG_ASAN 00:07:56.105 #undef SPDK_CONFIG_AVAHI 00:07:56.105 #undef SPDK_CONFIG_CET 00:07:56.105 #define SPDK_CONFIG_COVERAGE 1 00:07:56.105 #define SPDK_CONFIG_CROSS_PREFIX 00:07:56.105 #undef SPDK_CONFIG_CRYPTO 00:07:56.105 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:56.105 #undef SPDK_CONFIG_CUSTOMOCF 00:07:56.105 #undef SPDK_CONFIG_DAOS 00:07:56.105 #define SPDK_CONFIG_DAOS_DIR 00:07:56.105 #define SPDK_CONFIG_DEBUG 1 00:07:56.105 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:56.105 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:56.105 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:56.105 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:56.105 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:56.105 #undef SPDK_CONFIG_DPDK_UADK 00:07:56.105 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:56.105 #define SPDK_CONFIG_EXAMPLES 1 00:07:56.105 #undef SPDK_CONFIG_FC 00:07:56.105 #define SPDK_CONFIG_FC_PATH 00:07:56.105 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:56.105 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:56.105 #undef SPDK_CONFIG_FUSE 00:07:56.105 #undef SPDK_CONFIG_FUZZER 00:07:56.105 #define SPDK_CONFIG_FUZZER_LIB 00:07:56.105 #undef SPDK_CONFIG_GOLANG 00:07:56.105 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:56.105 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:56.105 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:56.105 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:56.105 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:56.105 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:56.105 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:56.105 #define SPDK_CONFIG_IDXD 1 00:07:56.105 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:56.105 #undef SPDK_CONFIG_IPSEC_MB 00:07:56.105 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:56.105 #define SPDK_CONFIG_ISAL 1 00:07:56.105 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:56.105 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:56.105 #define 
SPDK_CONFIG_LIBDIR 00:07:56.105 #undef SPDK_CONFIG_LTO 00:07:56.105 #define SPDK_CONFIG_MAX_LCORES 128 00:07:56.105 #define SPDK_CONFIG_NVME_CUSE 1 00:07:56.105 #undef SPDK_CONFIG_OCF 00:07:56.105 #define SPDK_CONFIG_OCF_PATH 00:07:56.105 #define SPDK_CONFIG_OPENSSL_PATH 00:07:56.105 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:56.105 #define SPDK_CONFIG_PGO_DIR 00:07:56.105 #undef SPDK_CONFIG_PGO_USE 00:07:56.105 #define SPDK_CONFIG_PREFIX /usr/local 00:07:56.105 #undef SPDK_CONFIG_RAID5F 00:07:56.105 #undef SPDK_CONFIG_RBD 00:07:56.105 #define SPDK_CONFIG_RDMA 1 00:07:56.105 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:56.105 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:56.105 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:56.105 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:56.105 #define SPDK_CONFIG_SHARED 1 00:07:56.105 #undef SPDK_CONFIG_SMA 00:07:56.105 #define SPDK_CONFIG_TESTS 1 00:07:56.105 #undef SPDK_CONFIG_TSAN 00:07:56.105 #define SPDK_CONFIG_UBLK 1 00:07:56.105 #define SPDK_CONFIG_UBSAN 1 00:07:56.105 #undef SPDK_CONFIG_UNIT_TESTS 00:07:56.105 #undef SPDK_CONFIG_URING 00:07:56.105 #define SPDK_CONFIG_URING_PATH 00:07:56.105 #undef SPDK_CONFIG_URING_ZNS 00:07:56.105 #undef SPDK_CONFIG_USDT 00:07:56.105 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:56.105 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:56.105 #define SPDK_CONFIG_VFIO_USER 1 00:07:56.105 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:56.105 #define SPDK_CONFIG_VHOST 1 00:07:56.105 #define SPDK_CONFIG_VIRTIO 1 00:07:56.105 #undef SPDK_CONFIG_VTUNE 00:07:56.105 #define SPDK_CONFIG_VTUNE_DIR 00:07:56.105 #define SPDK_CONFIG_WERROR 1 00:07:56.105 #define SPDK_CONFIG_WPDK_DIR 00:07:56.105 #undef SPDK_CONFIG_XNVME 00:07:56.105 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
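Editor's note: the applications.sh trace above decides whether debug-only app options may be honored by checking the generated include/spdk/config.h for the SPDK_CONFIG_DEBUG define before consulting SPDK_AUTOTEST_DEBUG_APPS. A minimal stand-alone version of that check — the file path matches the log, while the grep form is an illustrative simplification of the script's pattern match:

    CONFIG_H=./include/spdk/config.h
    if [[ -e "$CONFIG_H" ]] && grep -q '^#define SPDK_CONFIG_DEBUG 1' "$CONFIG_H"; then
        echo "debug build: SPDK_AUTOTEST_DEBUG_APPS options may be honored"
    else
        echo "release build: skipping debug-only app flags"
    fi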
00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:56.105 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:56.106 
00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:56.106 
00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:56.106 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
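The long run of paired ": <value>" / "export SPDK_TEST_*" trace lines above is autotest_common.sh assigning a default to each per-feature test flag and then exporting it. A minimal sketch of that idiom, assuming the usual bash default-expansion form (the concrete values are simply the ones visible in this run's trace, normally injected by the job configuration):

    # default-if-unset, then export; xtrace renders this as ": 0" followed by "export SPDK_TEST_NVME"
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME
    : "${SPDK_TEST_NVMF:=1}"
    export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT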
00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1028726 ]] 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1028726 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.uejftv 00:07:56.107 
00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.uejftv/tests/target /tmp/spdk.uejftv 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=53466476544 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8528232448 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996217856 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1138688 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:56.107 * Looking for test storage... 
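The df -T output above is consumed by a read loop that records, per mount point, the filesystem, total size, and free space, so set_test_storage can pick a location large enough for the requested test storage (requested_size=2214592512, i.e. 2 GiB plus slack, in this run). A compact sketch of that parse, mirroring the read order shown in the trace; the unit handling is an assumption, the real helper works in bytes:

    # Parse `df -T`, skipping the header, into per-mount associative arrays.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size        # 1K blocks as printed by df; byte conversion assumed elsewhere
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)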
00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:56.107 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53466476544 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10742824960 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
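nvmf/common.sh derives the initiator identity once with nvme gen-hostnqn and keeps the NQN/host-ID pair in NVME_HOST; the same pair is what the nvme connect call later in this test (filesystem.sh@60) hands back to the target. A minimal sketch of that flow, assuming nvme-cli is installed; the address, port, and subsystem NQN are the ones this run uses, and the host-ID derivation is inferred from the values in the trace rather than taken from common.sh:

    # Generate a host NQN and reuse its UUID suffix as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumption: strip everything up to "uuid:"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"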
00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.108 00:53:45 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:58.016 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:58.016 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:58.016 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:58.016 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:58.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.017 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:58.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:58.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:07:58.277 00:07:58.277 --- 10.0.0.2 ping statistics --- 00:07:58.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.277 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:07:58.277 00:07:58.277 --- 10.0.0.1 ping statistics --- 00:07:58.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.277 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.277 ************************************ 00:07:58.277 START TEST nvmf_filesystem_no_in_capsule 00:07:58.277 ************************************ 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1030352 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.277 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1030352 00:07:58.278 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
1030352 ']' 00:07:58.278 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.278 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:58.278 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.278 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:58.278 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 [2024-07-14 00:53:47.598002] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:58.278 [2024-07-14 00:53:47.598098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.278 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.278 [2024-07-14 00:53:47.662367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.536 [2024-07-14 00:53:47.755191] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.536 [2024-07-14 00:53:47.755264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.536 [2024-07-14 00:53:47.755278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.536 [2024-07-14 00:53:47.755288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.536 [2024-07-14 00:53:47.755297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:58.536 [2024-07-14 00:53:47.755415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.537 [2024-07-14 00:53:47.755493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.537 [2024-07-14 00:53:47.755562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.537 [2024-07-14 00:53:47.755564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.537 [2024-07-14 00:53:47.914749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.537 00:53:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.795 Malloc1 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.795 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.795 [2024-07-14 00:53:48.095043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:58.796 { 00:07:58.796 "name": "Malloc1", 00:07:58.796 "aliases": [ 00:07:58.796 "dc95679c-19b1-4d6f-9c5d-572356821c53" 00:07:58.796 ], 00:07:58.796 "product_name": "Malloc disk", 00:07:58.796 "block_size": 512, 00:07:58.796 "num_blocks": 1048576, 00:07:58.796 "uuid": "dc95679c-19b1-4d6f-9c5d-572356821c53", 00:07:58.796 "assigned_rate_limits": { 00:07:58.796 "rw_ios_per_sec": 0, 00:07:58.796 "rw_mbytes_per_sec": 0, 00:07:58.796 "r_mbytes_per_sec": 0, 00:07:58.796 "w_mbytes_per_sec": 0 00:07:58.796 }, 00:07:58.796 "claimed": true, 00:07:58.796 "claim_type": "exclusive_write", 00:07:58.796 "zoned": false, 00:07:58.796 "supported_io_types": { 00:07:58.796 "read": true, 00:07:58.796 "write": true, 00:07:58.796 "unmap": true, 00:07:58.796 "flush": true, 00:07:58.796 "reset": true, 00:07:58.796 "nvme_admin": false, 00:07:58.796 "nvme_io": false, 00:07:58.796 "nvme_io_md": false, 00:07:58.796 "write_zeroes": true, 00:07:58.796 "zcopy": true, 00:07:58.796 "get_zone_info": false, 00:07:58.796 "zone_management": false, 00:07:58.796 "zone_append": false, 00:07:58.796 "compare": false, 00:07:58.796 "compare_and_write": false, 00:07:58.796 "abort": true, 00:07:58.796 "seek_hole": false, 00:07:58.796 "seek_data": false, 00:07:58.796 "copy": true, 00:07:58.796 "nvme_iov_md": false 00:07:58.796 }, 00:07:58.796 "memory_domains": [ 00:07:58.796 { 
00:07:58.796 "dma_device_id": "system", 00:07:58.796 "dma_device_type": 1 00:07:58.796 }, 00:07:58.796 { 00:07:58.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.796 "dma_device_type": 2 00:07:58.796 } 00:07:58.796 ], 00:07:58.796 "driver_specific": {} 00:07:58.796 } 00:07:58.796 ]' 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:58.796 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:59.366 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:59.366 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:59.366 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:59.366 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:59.366 00:53:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:01.902 00:53:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:01.902 00:53:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:02.161 00:53:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.096 ************************************ 00:08:03.096 START TEST filesystem_ext4 00:08:03.096 ************************************ 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:03.096 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:03.096 00:53:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:03.096 mke2fs 1.46.5 (30-Dec-2021) 00:08:03.353 Discarding device blocks: 0/522240 done 00:08:03.353 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:03.353 Filesystem UUID: 88c797c8-632a-4ef0-a190-5bee70b1fd81 00:08:03.353 Superblock backups stored on blocks: 00:08:03.353 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:03.353 00:08:03.353 Allocating group tables: 0/64 done 00:08:03.353 Writing inode tables: 0/64 done 00:08:03.353 Creating journal (8192 blocks): done 00:08:03.353 Writing superblocks and filesystem accounting information: 0/64 done 00:08:03.353 00:08:03.353 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:03.353 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1030352 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.613 00:08:03.613 real 0m0.463s 00:08:03.613 user 0m0.019s 00:08:03.613 sys 0m0.047s 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:03.613 ************************************ 00:08:03.613 END TEST filesystem_ext4 00:08:03.613 ************************************ 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:03.613 00:53:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.613 00:53:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.613 ************************************ 00:08:03.613 START TEST filesystem_btrfs 00:08:03.613 ************************************ 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:03.613 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:04.181 btrfs-progs v6.6.2 00:08:04.181 See https://btrfs.readthedocs.io for more information. 00:08:04.181 00:08:04.181 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:04.182 NOTE: several default settings have changed in version 5.15, please make sure 00:08:04.182 this does not affect your deployments: 00:08:04.182 - DUP for metadata (-m dup) 00:08:04.182 - enabled no-holes (-O no-holes) 00:08:04.182 - enabled free-space-tree (-R free-space-tree) 00:08:04.182 00:08:04.182 Label: (null) 00:08:04.182 UUID: a977d65c-3824-4f20-8fa5-fdc1f53c8b19 00:08:04.182 Node size: 16384 00:08:04.182 Sector size: 4096 00:08:04.182 Filesystem size: 510.00MiB 00:08:04.182 Block group profiles: 00:08:04.182 Data: single 8.00MiB 00:08:04.182 Metadata: DUP 32.00MiB 00:08:04.182 System: DUP 8.00MiB 00:08:04.182 SSD detected: yes 00:08:04.182 Zoned device: no 00:08:04.182 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:04.182 Runtime features: free-space-tree 00:08:04.182 Checksum: crc32c 00:08:04.182 Number of devices: 1 00:08:04.182 Devices: 00:08:04.182 ID SIZE PATH 00:08:04.182 1 510.00MiB /dev/nvme0n1p1 00:08:04.182 00:08:04.182 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:04.182 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.440 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.440 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:04.440 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.440 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:04.440 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:04.440 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1030352 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.699 00:08:04.699 real 0m0.869s 00:08:04.699 user 0m0.017s 00:08:04.699 sys 0m0.123s 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:04.699 ************************************ 00:08:04.699 END TEST filesystem_btrfs 00:08:04.699 ************************************ 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.699 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.699 ************************************ 00:08:04.699 START TEST filesystem_xfs 00:08:04.699 ************************************ 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:04.700 00:53:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.700 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:04.700 = sectsz=512 attr=2, projid32bit=1 00:08:04.700 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.700 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.700 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.700 = sunit=0 swidth=0 blks 00:08:04.700 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.700 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.700 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.700 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:05.667 Discarding blocks...Done. 
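The mkfs invocations traced above (mkfs.ext4 -F, mkfs.btrfs -f, mkfs.xfs -f) all go through the suite's make_filesystem helper. Reconstructed from the xtrace lines, and with the retry counter visible as 'local i=0' left out, the helper reduces to roughly:

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4 forces re-creation with -F, everything else (btrfs, xfs) with -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" $force "$dev_name"
    }
    # as invoked for the test partition in this run
    make_filesystem xfs /dev/nvme0n1p1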
00:08:05.667 00:53:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:05.667 00:53:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1030352 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.583 00:08:07.583 real 0m2.896s 00:08:07.583 user 0m0.023s 00:08:07.583 sys 0m0.049s 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:07.583 ************************************ 00:08:07.583 END TEST filesystem_xfs 00:08:07.583 ************************************ 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:07.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.583 00:53:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1030352 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1030352 ']' 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1030352 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.583 00:53:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030352 00:08:07.842 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:07.842 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:07.842 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030352' 00:08:07.842 killing process with pid 1030352 00:08:07.842 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1030352 00:08:07.842 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1030352 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:08.101 00:08:08.101 real 0m9.896s 00:08:08.101 user 0m37.790s 00:08:08.101 sys 0m1.678s 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.101 ************************************ 00:08:08.101 END TEST nvmf_filesystem_no_in_capsule 00:08:08.101 ************************************ 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.101 ************************************ 00:08:08.101 START TEST nvmf_filesystem_in_capsule 00:08:08.101 ************************************ 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1031769 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1031769 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1031769 ']' 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.101 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.360 [2024-07-14 00:53:57.540576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:08.360 [2024-07-14 00:53:57.540663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.360 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.360 [2024-07-14 00:53:57.612232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.360 [2024-07-14 00:53:57.708636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.360 [2024-07-14 00:53:57.708702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
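Once this target is up, the entries that follow configure it over the RPC socket: a TCP transport with an in-capsule data size of 4096 (versus 0 in the previous run), a 512 MiB Malloc1 bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. Condensed into plain rpc.py calls (the harness issues the same RPCs through its rpc_cmd wrapper; the rpc.py path is assumed from the SPDK tree), the sequence is roughly:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420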
00:08:08.360 [2024-07-14 00:53:57.708719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.360 [2024-07-14 00:53:57.708732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.360 [2024-07-14 00:53:57.708744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.360 [2024-07-14 00:53:57.708802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.360 [2024-07-14 00:53:57.708858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.360 [2024-07-14 00:53:57.708940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.360 [2024-07-14 00:53:57.708937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.620 [2024-07-14 00:53:57.862803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.620 00:53:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.620 Malloc1 00:08:08.620 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.620 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:08.620 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.620 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.880 00:53:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.880 [2024-07-14 00:53:58.049288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:08.880 { 00:08:08.880 "name": "Malloc1", 00:08:08.880 "aliases": [ 00:08:08.880 "5b242f03-819b-461f-8187-7e0c475ff444" 00:08:08.880 ], 00:08:08.880 "product_name": "Malloc disk", 00:08:08.880 "block_size": 512, 00:08:08.880 "num_blocks": 1048576, 00:08:08.880 "uuid": "5b242f03-819b-461f-8187-7e0c475ff444", 00:08:08.880 "assigned_rate_limits": { 00:08:08.880 "rw_ios_per_sec": 0, 00:08:08.880 "rw_mbytes_per_sec": 0, 00:08:08.880 "r_mbytes_per_sec": 0, 00:08:08.880 "w_mbytes_per_sec": 0 00:08:08.880 }, 00:08:08.880 "claimed": true, 00:08:08.880 "claim_type": "exclusive_write", 00:08:08.880 "zoned": false, 00:08:08.880 "supported_io_types": { 00:08:08.880 "read": true, 00:08:08.880 "write": true, 00:08:08.880 "unmap": true, 00:08:08.880 "flush": true, 00:08:08.880 "reset": true, 00:08:08.880 "nvme_admin": false, 00:08:08.880 "nvme_io": false, 00:08:08.880 "nvme_io_md": false, 00:08:08.880 "write_zeroes": true, 00:08:08.880 "zcopy": true, 00:08:08.880 "get_zone_info": false, 00:08:08.880 "zone_management": false, 00:08:08.880 
"zone_append": false, 00:08:08.880 "compare": false, 00:08:08.880 "compare_and_write": false, 00:08:08.880 "abort": true, 00:08:08.880 "seek_hole": false, 00:08:08.880 "seek_data": false, 00:08:08.880 "copy": true, 00:08:08.880 "nvme_iov_md": false 00:08:08.880 }, 00:08:08.880 "memory_domains": [ 00:08:08.880 { 00:08:08.880 "dma_device_id": "system", 00:08:08.880 "dma_device_type": 1 00:08:08.880 }, 00:08:08.880 { 00:08:08.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.880 "dma_device_type": 2 00:08:08.880 } 00:08:08.880 ], 00:08:08.880 "driver_specific": {} 00:08:08.880 } 00:08:08.880 ]' 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:08.880 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:09.449 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:09.449 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:09.449 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:09.449 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:09.449 00:53:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:11.985 00:54:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:11.985 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:12.244 00:54:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.181 ************************************ 00:08:13.181 START TEST filesystem_in_capsule_ext4 00:08:13.181 ************************************ 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:13.181 00:54:02 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:13.181 00:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:13.181 mke2fs 1.46.5 (30-Dec-2021) 00:08:13.181 Discarding device blocks: 0/522240 done 00:08:13.181 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:13.181 Filesystem UUID: bc294f79-2832-4655-bd5c-7e1e152e4a52 00:08:13.181 Superblock backups stored on blocks: 00:08:13.181 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:13.181 00:08:13.181 Allocating group tables: 0/64 done 00:08:13.441 Writing inode tables: 0/64 done 00:08:13.441 Creating journal (8192 blocks): done 00:08:14.264 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:08:14.264 00:08:14.264 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:14.264 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1031769 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:14.522 00:08:14.522 real 0m1.471s 00:08:14.522 user 0m0.021s 00:08:14.522 sys 0m0.052s 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.522 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:14.522 ************************************ 00:08:14.522 END TEST filesystem_in_capsule_ext4 00:08:14.522 ************************************ 
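Each of these in-capsule filesystem tests runs against the same nvme0n1 namespace, attached just before them with nvme-cli and polled for by waitforserial. Condensed from the trace (host NQN/ID and loop bounds as recorded; the loop shape approximates the helper):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    # waitforserial: poll until one device with the subsystem serial shows up
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    done
    # resolve the kernel name of that namespace (nvme0n1 in this run)
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')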
00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.782 ************************************ 00:08:14.782 START TEST filesystem_in_capsule_btrfs 00:08:14.782 ************************************ 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:14.782 00:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:15.041 btrfs-progs v6.6.2 00:08:15.041 See https://btrfs.readthedocs.io for more information. 00:08:15.041 00:08:15.041 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:15.042 NOTE: several default settings have changed in version 5.15, please make sure 00:08:15.042 this does not affect your deployments: 00:08:15.042 - DUP for metadata (-m dup) 00:08:15.042 - enabled no-holes (-O no-holes) 00:08:15.042 - enabled free-space-tree (-R free-space-tree) 00:08:15.042 00:08:15.042 Label: (null) 00:08:15.042 UUID: 71512926-654a-4a68-864b-e97df89d6809 00:08:15.042 Node size: 16384 00:08:15.042 Sector size: 4096 00:08:15.042 Filesystem size: 510.00MiB 00:08:15.042 Block group profiles: 00:08:15.042 Data: single 8.00MiB 00:08:15.042 Metadata: DUP 32.00MiB 00:08:15.042 System: DUP 8.00MiB 00:08:15.042 SSD detected: yes 00:08:15.042 Zoned device: no 00:08:15.042 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:15.042 Runtime features: free-space-tree 00:08:15.042 Checksum: crc32c 00:08:15.042 Number of devices: 1 00:08:15.042 Devices: 00:08:15.042 ID SIZE PATH 00:08:15.042 1 510.00MiB /dev/nvme0n1p1 00:08:15.042 00:08:15.042 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:15.042 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.610 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:15.610 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:15.610 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:15.610 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:15.610 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:15.610 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1031769 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:15.611 00:08:15.611 real 0m0.835s 00:08:15.611 user 0m0.014s 00:08:15.611 sys 0m0.122s 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:15.611 ************************************ 00:08:15.611 END TEST filesystem_in_capsule_btrfs 00:08:15.611 ************************************ 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:15.611 ************************************ 00:08:15.611 START TEST filesystem_in_capsule_xfs 00:08:15.611 ************************************ 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:15.611 00:54:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:15.611 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:15.611 = sectsz=512 attr=2, projid32bit=1 00:08:15.611 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:15.611 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:15.611 data = bsize=4096 blocks=130560, imaxpct=25 00:08:15.611 = sunit=0 swidth=0 blks 00:08:15.611 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:15.611 log =internal log bsize=4096 blocks=16384, version=2 00:08:15.611 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:15.611 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:16.548 Discarding blocks...Done. 
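(For readers reproducing this step outside the harness: the mkfs.xfs output above is followed by a short mount/write/unmount smoke test. A minimal sketch of that cycle, using the device and the /mnt/device mountpoint taken from the trace; the mkdir is an assumption for a fresh host, and the mkfs is destructive, so run it only against a scratch partition:
  # format the partition exported over NVMe/TCP (destructive)
  mkfs.xfs -f /dev/nvme0n1p1
  # mount it and check that a small write survives a sync
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
)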
00:08:16.548 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:16.548 00:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1031769 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.451 00:08:18.451 real 0m2.965s 00:08:18.451 user 0m0.020s 00:08:18.451 sys 0m0.047s 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:18.451 ************************************ 00:08:18.451 END TEST filesystem_in_capsule_xfs 00:08:18.451 ************************************ 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:18.451 00:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:18.709 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:18.709 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:18.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:18.967 00:54:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1031769 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1031769 ']' 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1031769 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1031769 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1031769' 00:08:18.967 killing process with pid 1031769 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1031769 00:08:18.967 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1031769 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:19.533 00:08:19.533 real 0m11.201s 00:08:19.533 user 0m42.908s 00:08:19.533 sys 0m1.753s 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:19.533 ************************************ 00:08:19.533 END TEST nvmf_filesystem_in_capsule 00:08:19.533 ************************************ 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.533 rmmod nvme_tcp 00:08:19.533 rmmod nvme_fabrics 00:08:19.533 rmmod nvme_keyring 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.533 00:54:08 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.482 00:54:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.482 00:08:21.482 real 0m25.509s 00:08:21.482 user 1m21.549s 00:08:21.482 sys 0m4.992s 00:08:21.482 00:54:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.482 00:54:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.482 ************************************ 00:08:21.482 END TEST nvmf_filesystem 00:08:21.482 ************************************ 00:08:21.482 00:54:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:21.482 00:54:10 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:21.482 00:54:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:21.482 00:54:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.482 00:54:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.743 ************************************ 00:08:21.743 START TEST nvmf_target_discovery 00:08:21.743 ************************************ 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:21.743 * Looking for test storage... 
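(The nvmftestfini sequence traced above tears the initiator side down before the next suite starts. A rough equivalent, assuming the cvl_0_* interface names and the cvl_0_0_ns_spdk namespace used in this run; the netns deletion is an assumption about what _remove_spdk_ns does, since the trace does not expand it:
  # unload the initiator-side modules (their verbose removal is what shows up
  # above as "rmmod nvme_tcp" / "rmmod nvme_fabrics" / "rmmod nvme_keyring")
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # drop the test address from the initiator interface
  ip -4 addr flush cvl_0_1
  # assumed equivalent of _remove_spdk_ns for the target-side namespace
  ip netns delete cvl_0_0_ns_spdk
)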
00:08:21.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.743 00:54:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.647 00:54:12 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:23.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:23.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:23.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:23.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.647 00:54:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.647 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.647 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.647 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:23.647 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:08:23.906 00:08:23.906 --- 10.0.0.2 ping statistics --- 00:08:23.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.906 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:08:23.906 00:08:23.906 --- 10.0.0.1 ping statistics --- 00:08:23.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.906 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1035789 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1035789 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1035789 ']' 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:23.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.906 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.906 [2024-07-14 00:54:13.196070] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:23.906 [2024-07-14 00:54:13.196168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.906 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.906 [2024-07-14 00:54:13.270849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.163 [2024-07-14 00:54:13.366365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.163 [2024-07-14 00:54:13.366419] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.163 [2024-07-14 00:54:13.366436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.164 [2024-07-14 00:54:13.366451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.164 [2024-07-14 00:54:13.366463] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.164 [2024-07-14 00:54:13.366549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.164 [2024-07-14 00:54:13.366610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.164 [2024-07-14 00:54:13.366659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.164 [2024-07-14 00:54:13.366662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.164 [2024-07-14 00:54:13.531934] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.164 Null1 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.164 [2024-07-14 00:54:13.572230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.164 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 Null2 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:24.421 00:54:13 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 Null3 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 Null4 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.421 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:24.422 00:08:24.422 Discovery Log Number of Records 6, Generation counter 6 00:08:24.422 =====Discovery Log Entry 0====== 00:08:24.422 trtype: tcp 00:08:24.422 adrfam: ipv4 00:08:24.422 subtype: current discovery subsystem 00:08:24.422 treq: not required 00:08:24.422 portid: 0 00:08:24.422 trsvcid: 4420 00:08:24.422 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:24.422 traddr: 10.0.0.2 00:08:24.422 eflags: explicit discovery connections, duplicate discovery information 00:08:24.422 sectype: none 00:08:24.422 =====Discovery Log Entry 1====== 00:08:24.422 trtype: tcp 00:08:24.422 adrfam: ipv4 00:08:24.422 subtype: nvme subsystem 00:08:24.422 treq: not required 00:08:24.422 portid: 0 00:08:24.422 trsvcid: 4420 00:08:24.422 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:24.422 traddr: 10.0.0.2 00:08:24.422 eflags: none 00:08:24.422 sectype: none 00:08:24.422 =====Discovery Log Entry 2====== 00:08:24.422 trtype: tcp 00:08:24.422 adrfam: ipv4 00:08:24.422 subtype: nvme subsystem 00:08:24.422 treq: not required 00:08:24.422 portid: 0 00:08:24.422 trsvcid: 4420 00:08:24.422 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:24.422 traddr: 10.0.0.2 00:08:24.422 eflags: none 00:08:24.422 sectype: none 00:08:24.422 =====Discovery Log Entry 3====== 00:08:24.422 trtype: tcp 00:08:24.422 adrfam: ipv4 00:08:24.422 subtype: nvme subsystem 00:08:24.422 treq: not required 00:08:24.422 portid: 0 00:08:24.422 trsvcid: 4420 00:08:24.422 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:24.422 traddr: 10.0.0.2 00:08:24.422 eflags: none 00:08:24.422 sectype: none 00:08:24.422 =====Discovery Log Entry 4====== 00:08:24.422 trtype: tcp 00:08:24.422 adrfam: ipv4 00:08:24.422 subtype: nvme subsystem 00:08:24.422 treq: not required 
00:08:24.422 portid: 0 00:08:24.422 trsvcid: 4420 00:08:24.422 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:24.422 traddr: 10.0.0.2 00:08:24.422 eflags: none 00:08:24.422 sectype: none 00:08:24.422 =====Discovery Log Entry 5====== 00:08:24.422 trtype: tcp 00:08:24.422 adrfam: ipv4 00:08:24.422 subtype: discovery subsystem referral 00:08:24.422 treq: not required 00:08:24.422 portid: 0 00:08:24.422 trsvcid: 4430 00:08:24.422 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:24.422 traddr: 10.0.0.2 00:08:24.422 eflags: none 00:08:24.422 sectype: none 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:24.422 Perform nvmf subsystem discovery via RPC 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.422 [ 00:08:24.422 { 00:08:24.422 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:24.422 "subtype": "Discovery", 00:08:24.422 "listen_addresses": [ 00:08:24.422 { 00:08:24.422 "trtype": "TCP", 00:08:24.422 "adrfam": "IPv4", 00:08:24.422 "traddr": "10.0.0.2", 00:08:24.422 "trsvcid": "4420" 00:08:24.422 } 00:08:24.422 ], 00:08:24.422 "allow_any_host": true, 00:08:24.422 "hosts": [] 00:08:24.422 }, 00:08:24.422 { 00:08:24.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.422 "subtype": "NVMe", 00:08:24.422 "listen_addresses": [ 00:08:24.422 { 00:08:24.422 "trtype": "TCP", 00:08:24.422 "adrfam": "IPv4", 00:08:24.422 "traddr": "10.0.0.2", 00:08:24.422 "trsvcid": "4420" 00:08:24.422 } 00:08:24.422 ], 00:08:24.422 "allow_any_host": true, 00:08:24.422 "hosts": [], 00:08:24.422 "serial_number": "SPDK00000000000001", 00:08:24.422 "model_number": "SPDK bdev Controller", 00:08:24.422 "max_namespaces": 32, 00:08:24.422 "min_cntlid": 1, 00:08:24.422 "max_cntlid": 65519, 00:08:24.422 "namespaces": [ 00:08:24.422 { 00:08:24.422 "nsid": 1, 00:08:24.422 "bdev_name": "Null1", 00:08:24.422 "name": "Null1", 00:08:24.422 "nguid": "DF244D10ED534BA8AEDB6A528EE41264", 00:08:24.422 "uuid": "df244d10-ed53-4ba8-aedb-6a528ee41264" 00:08:24.422 } 00:08:24.422 ] 00:08:24.422 }, 00:08:24.422 { 00:08:24.422 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:24.422 "subtype": "NVMe", 00:08:24.422 "listen_addresses": [ 00:08:24.422 { 00:08:24.422 "trtype": "TCP", 00:08:24.422 "adrfam": "IPv4", 00:08:24.422 "traddr": "10.0.0.2", 00:08:24.422 "trsvcid": "4420" 00:08:24.422 } 00:08:24.422 ], 00:08:24.422 "allow_any_host": true, 00:08:24.422 "hosts": [], 00:08:24.422 "serial_number": "SPDK00000000000002", 00:08:24.422 "model_number": "SPDK bdev Controller", 00:08:24.422 "max_namespaces": 32, 00:08:24.422 "min_cntlid": 1, 00:08:24.422 "max_cntlid": 65519, 00:08:24.422 "namespaces": [ 00:08:24.422 { 00:08:24.422 "nsid": 1, 00:08:24.422 "bdev_name": "Null2", 00:08:24.422 "name": "Null2", 00:08:24.422 "nguid": "1A64CB36E2BC40CDBB311BAA1E8CC590", 00:08:24.422 "uuid": "1a64cb36-e2bc-40cd-bb31-1baa1e8cc590" 00:08:24.422 } 00:08:24.422 ] 00:08:24.422 }, 00:08:24.422 { 00:08:24.422 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:24.422 "subtype": "NVMe", 00:08:24.422 "listen_addresses": [ 00:08:24.422 { 00:08:24.422 "trtype": "TCP", 00:08:24.422 "adrfam": "IPv4", 00:08:24.422 "traddr": "10.0.0.2", 00:08:24.422 "trsvcid": "4420" 00:08:24.422 } 00:08:24.422 ], 00:08:24.422 "allow_any_host": true, 
00:08:24.422 "hosts": [], 00:08:24.422 "serial_number": "SPDK00000000000003", 00:08:24.422 "model_number": "SPDK bdev Controller", 00:08:24.422 "max_namespaces": 32, 00:08:24.422 "min_cntlid": 1, 00:08:24.422 "max_cntlid": 65519, 00:08:24.422 "namespaces": [ 00:08:24.422 { 00:08:24.422 "nsid": 1, 00:08:24.422 "bdev_name": "Null3", 00:08:24.422 "name": "Null3", 00:08:24.422 "nguid": "2A0AAE40EF384AF1B00182C1ABD09294", 00:08:24.422 "uuid": "2a0aae40-ef38-4af1-b001-82c1abd09294" 00:08:24.422 } 00:08:24.422 ] 00:08:24.422 }, 00:08:24.422 { 00:08:24.422 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:24.422 "subtype": "NVMe", 00:08:24.422 "listen_addresses": [ 00:08:24.422 { 00:08:24.422 "trtype": "TCP", 00:08:24.422 "adrfam": "IPv4", 00:08:24.422 "traddr": "10.0.0.2", 00:08:24.422 "trsvcid": "4420" 00:08:24.422 } 00:08:24.422 ], 00:08:24.422 "allow_any_host": true, 00:08:24.422 "hosts": [], 00:08:24.422 "serial_number": "SPDK00000000000004", 00:08:24.422 "model_number": "SPDK bdev Controller", 00:08:24.422 "max_namespaces": 32, 00:08:24.422 "min_cntlid": 1, 00:08:24.422 "max_cntlid": 65519, 00:08:24.422 "namespaces": [ 00:08:24.422 { 00:08:24.422 "nsid": 1, 00:08:24.422 "bdev_name": "Null4", 00:08:24.422 "name": "Null4", 00:08:24.422 "nguid": "36058F2BC5E7474E801FEDA7558AB989", 00:08:24.422 "uuid": "36058f2b-c5e7-474e-801f-eda7558ab989" 00:08:24.422 } 00:08:24.422 ] 00:08:24.422 } 00:08:24.422 ] 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.422 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.680 rmmod nvme_tcp 00:08:24.680 rmmod nvme_fabrics 00:08:24.680 rmmod nvme_keyring 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1035789 ']' 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1035789 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1035789 ']' 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1035789 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1035789 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1035789' 00:08:24.680 killing process with pid 1035789 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1035789 00:08:24.680 00:54:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1035789 00:08:24.938 00:54:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.938 00:54:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:24.938 00:54:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:24.938 00:54:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.938 00:54:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:24.938 00:54:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.938 00:54:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.938 00:54:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.840 00:54:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:26.840 00:08:26.840 real 0m5.389s 00:08:26.840 user 0m4.092s 00:08:26.840 sys 0m1.880s 00:08:26.840 00:54:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.840 00:54:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.840 ************************************ 00:08:26.840 END TEST nvmf_target_discovery 00:08:26.840 ************************************ 00:08:27.098 00:54:16 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:27.098 00:54:16 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:27.098 00:54:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.098 00:54:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.098 00:54:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.098 ************************************ 00:08:27.098 START TEST nvmf_referrals 00:08:27.098 ************************************ 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:27.098 * Looking for test storage... 00:08:27.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.098 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
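The three loopback referral addresses set here, together with the 4430 referral port defined on the next line, are all the state the referral checks further down rely on. A condensed sketch of that add/verify/remove cycle, assuming the discovery listener on 10.0.0.2:8009 that the test creates below and the stock scripts/rpc.py client (the subcommands, flags and jq filters are the ones the xtrace later exercises; only the explicit ./scripts/rpc.py invocation is assumed):

    # advertise three referral discovery services
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 3

    # initiator view: referrals show up as extra records in the discovery log page
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # removing them brings the referral count back to 0
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done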
00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.099 00:54:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.012 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.012 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:29.012 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:29.012 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.013 00:54:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:29.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:29.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.013 00:54:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:29.013 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:29.013 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.013 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.272 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.272 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.272 00:54:18 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:29.272 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.272 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.272 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.272 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:29.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:08:29.272 00:08:29.272 --- 10.0.0.2 ping statistics --- 00:08:29.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.272 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:29.272 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:08:29.272 00:08:29.272 --- 10.0.0.1 ping statistics --- 00:08:29.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.272 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1037827 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1037827 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1037827 ']' 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
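Both pings succeeding means the harness's namespace plumbing is in place: one port of the ice pair was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while the other stays in the root namespace as the initiator (10.0.0.1). A condensed sketch of the commands nvmf_tcp_init ran above, using the interface names this test bed detected:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the nvmfpid recorded just below belongs to a process that only sees cvl_0_0.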
00:08:29.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.273 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.273 [2024-07-14 00:54:18.584481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:08:29.273 [2024-07-14 00:54:18.584566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.273 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.273 [2024-07-14 00:54:18.655285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.533 [2024-07-14 00:54:18.752805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.533 [2024-07-14 00:54:18.752884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.533 [2024-07-14 00:54:18.752903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.533 [2024-07-14 00:54:18.752917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.533 [2024-07-14 00:54:18.752928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.533 [2024-07-14 00:54:18.755891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.533 [2024-07-14 00:54:18.755926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.533 [2024-07-14 00:54:18.755989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.533 [2024-07-14 00:54:18.755993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.533 [2024-07-14 00:54:18.922775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.533 [2024-07-14 00:54:18.935028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.533 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.794 00:54:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:29.794 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.054 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:30.314 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:30.314 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:30.314 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:30.314 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:30.314 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:30.314 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.314 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:30.574 00:54:19 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:30.574 00:54:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:30.833 00:54:20 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:30.833 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:31.092 
00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.092 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.092 rmmod nvme_tcp 00:08:31.092 rmmod nvme_fabrics 00:08:31.352 rmmod nvme_keyring 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1037827 ']' 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1037827 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1037827 ']' 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1037827 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:31.352 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.353 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1037827 00:08:31.353 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:31.353 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:31.353 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1037827' 00:08:31.353 killing process with pid 1037827 00:08:31.353 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1037827 00:08:31.353 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1037827 00:08:31.612 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:31.612 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:31.612 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:31.612 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.612 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:31.612 00:54:20 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.612 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.612 00:54:20 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.521 00:54:22 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.521 00:08:33.521 real 0m6.536s 00:08:33.521 user 0m9.381s 00:08:33.521 sys 0m2.119s 00:08:33.521 00:54:22 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.521 00:54:22 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.521 ************************************ 00:08:33.521 END TEST nvmf_referrals 00:08:33.521 ************************************ 00:08:33.521 00:54:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:33.521 00:54:22 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:33.521 00:54:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.521 00:54:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.521 00:54:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.521 ************************************ 00:08:33.521 START TEST nvmf_connect_disconnect 00:08:33.521 ************************************ 00:08:33.521 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:33.780 * Looking for test storage... 00:08:33.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.780 00:54:22 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.780 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.781 00:54:22 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:35.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:35.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.687 00:54:24 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:35.687 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:35.687 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:35.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.688 00:54:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:35.688 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:35.688 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.688 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.688 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.688 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.688 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:35.688 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:35.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:08:35.947 00:08:35.947 --- 10.0.0.2 ping statistics --- 00:08:35.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.947 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:35.947 00:08:35.947 --- 10.0.0.1 ping statistics --- 00:08:35.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.947 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1040118 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1040118 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1040118 ']' 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.947 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.947 [2024-07-14 00:54:25.214528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:08:35.947 [2024-07-14 00:54:25.214615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.947 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.947 [2024-07-14 00:54:25.284548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.206 [2024-07-14 00:54:25.379091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.206 [2024-07-14 00:54:25.379153] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.206 [2024-07-14 00:54:25.379169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.206 [2024-07-14 00:54:25.379183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.206 [2024-07-14 00:54:25.379194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.206 [2024-07-14 00:54:25.379285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.206 [2024-07-14 00:54:25.379339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.206 [2024-07-14 00:54:25.379393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.206 [2024-07-14 00:54:25.379395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.206 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.207 [2024-07-14 00:54:25.548000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.207 00:54:25 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:36.207 [2024-07-14 00:54:25.600170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:36.207 00:54:25 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:38.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.035 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:24.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.331 rmmod nvme_tcp 00:12:27.331 rmmod nvme_fabrics 00:12:27.331 rmmod nvme_keyring 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1040118 ']' 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1040118 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
1040118 ']' 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1040118 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1040118 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1040118' 00:12:27.331 killing process with pid 1040118 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1040118 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1040118 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.331 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.332 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.332 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.332 00:58:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.239 00:58:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.239 00:12:29.239 real 3m55.686s 00:12:29.239 user 14m57.227s 00:12:29.239 sys 0m35.001s 00:12:29.239 00:58:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.239 00:58:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:29.239 ************************************ 00:12:29.239 END TEST nvmf_connect_disconnect 00:12:29.239 ************************************ 00:12:29.239 00:58:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.239 00:58:18 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.239 00:58:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.239 00:58:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.239 00:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.239 ************************************ 00:12:29.239 START TEST nvmf_multitarget 00:12:29.239 ************************************ 00:12:29.239 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.498 * Looking for test storage... 
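The nvmf_connect_disconnect run above performs 100 connect/disconnect cycles against nqn.2016-06.io.spdk:cnode1 (num_iterations=100, NVME_CONNECT='nvme connect -i 8'); each "disconnected 1 controller(s)" line corresponds to one completed iteration. A minimal sketch of what one such cycle is assumed to look like with nvme-cli follows; the loop structure and variable names are illustrative, not taken from connect_disconnect.sh:

    #!/usr/bin/env bash
    # Assumed sketch of the connect/disconnect iterations against the listener
    # created above (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1).
    nqn=nqn.2016-06.io.spdk:cnode1
    addr=10.0.0.2
    port=4420
    iterations=100                       # matches num_iterations=100 in the trace

    for ((i = 1; i <= iterations; i++)); do
        # '-i 8' mirrors NVME_CONNECT='nvme connect -i 8' (8 I/O queues)
        nvme connect -t tcp -a "$addr" -s "$port" -n "$nqn" -i 8
        # tearing the association down again produces the
        # 'NQN:... disconnected 1 controller(s)' lines seen above
        nvme disconnect -n "$nqn"
    done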
00:12:29.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.498 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.498 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:29.498 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.498 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.498 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.498 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.499 00:58:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:31.404 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.404 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:31.405 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:31.405 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:31.405 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:31.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:12:31.405 00:12:31.405 --- 10.0.0.2 ping statistics --- 00:12:31.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.405 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:12:31.405 00:12:31.405 --- 10.0.0.1 ping statistics --- 00:12:31.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.405 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1071196 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1071196 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1071196 ']' 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.405 00:58:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.663 [2024-07-14 00:58:20.832582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:12:31.663 [2024-07-14 00:58:20.832668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.663 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.664 [2024-07-14 00:58:20.897328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.664 [2024-07-14 00:58:20.990763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.664 [2024-07-14 00:58:20.990823] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.664 [2024-07-14 00:58:20.990840] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.664 [2024-07-14 00:58:20.990855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.664 [2024-07-14 00:58:20.990881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.664 [2024-07-14 00:58:20.990978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.664 [2024-07-14 00:58:20.991008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.664 [2024-07-14 00:58:20.991065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.664 [2024-07-14 00:58:20.991067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:31.922 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:32.180 "nvmf_tgt_1" 00:12:32.180 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:32.180 "nvmf_tgt_2" 00:12:32.180 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.180 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:32.180 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:32.180 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:32.438 true 00:12:32.438 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:32.438 true 00:12:32.438 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.438 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:32.697 rmmod nvme_tcp 00:12:32.697 rmmod nvme_fabrics 00:12:32.697 rmmod nvme_keyring 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1071196 ']' 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1071196 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1071196 ']' 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1071196 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1071196 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1071196' 00:12:32.697 killing process with pid 1071196 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1071196 00:12:32.697 00:58:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1071196 00:12:32.954 00:58:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:32.954 00:58:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:32.954 00:58:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:32.954 00:58:22 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.954 00:58:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:32.954 00:58:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.954 00:58:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.954 00:58:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.860 00:58:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:34.860 00:12:34.860 real 0m5.628s 00:12:34.860 user 0m6.250s 00:12:34.860 sys 0m1.888s 00:12:34.860 00:58:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.860 00:58:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:34.860 ************************************ 00:12:34.860 END TEST nvmf_multitarget 00:12:34.860 ************************************ 00:12:35.118 00:58:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:35.118 00:58:24 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:35.118 00:58:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.118 00:58:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.118 00:58:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.118 ************************************ 00:12:35.118 START TEST nvmf_rpc 00:12:35.118 ************************************ 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:35.118 * Looking for test storage... 
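The rpc.sh stage that starts here drives the target entirely through SPDK's JSON-RPC interface, wrapped by the rpc_cmd helper used throughout the trace. A minimal standalone sketch of the same kind of call, assuming the default /var/tmp/spdk.sock RPC socket and the workspace checkout path used in this run (both are assumptions; rpc_cmd resolves the socket from the test environment instead):

    # Count the poll groups reported by the running nvmf target, as rpc.sh does further below.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_get_stats | jq '.poll_groups | length'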
00:12:35.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.118 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.119 00:58:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
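The gather_supported_nvmf_pci_devs trace that follows builds lists of known Intel E810/X722 and Mellanox PCI device IDs and matches them against the bus to pick the NICs used for this TCP run. A rough standalone sysfs equivalent, shown only as a sketch (the 8086:159b IDs come from the "Found 0000:0a:00.x" lines below; the loop itself is an illustrative assumption, not the helper's actual code):

    # Print the kernel net devices behind every Intel E810 (8086:159b) port on the PCI bus.
    for dev in /sys/bus/pci/devices/*; do
        if [[ $(cat "$dev/vendor") == 0x8086 && $(cat "$dev/device") == 0x159b ]]; then
            echo "NVMe-oF candidate ${dev##*/}: $(ls "$dev/net" 2>/dev/null)"
        fi
    done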
00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:37.022 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:37.022 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.022 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:37.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:37.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:37.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:12:37.281 00:12:37.281 --- 10.0.0.2 ping statistics --- 00:12:37.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.281 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:12:37.281 00:12:37.281 --- 10.0.0.1 ping statistics --- 00:12:37.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.281 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1073294 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1073294 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1073294 ']' 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.281 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.281 [2024-07-14 00:58:26.654694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:37.281 [2024-07-14 00:58:26.654772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.281 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.539 [2024-07-14 00:58:26.726802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.539 [2024-07-14 00:58:26.823353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.539 [2024-07-14 00:58:26.823425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:37.539 [2024-07-14 00:58:26.823442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.539 [2024-07-14 00:58:26.823462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.539 [2024-07-14 00:58:26.823474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.539 [2024-07-14 00:58:26.823552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.539 [2024-07-14 00:58:26.823616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.539 [2024-07-14 00:58:26.823641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.539 [2024-07-14 00:58:26.823643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.539 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.539 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:37.539 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:37.539 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:37.539 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:37.799 "tick_rate": 2700000000, 00:12:37.799 "poll_groups": [ 00:12:37.799 { 00:12:37.799 "name": "nvmf_tgt_poll_group_000", 00:12:37.799 "admin_qpairs": 0, 00:12:37.799 "io_qpairs": 0, 00:12:37.799 "current_admin_qpairs": 0, 00:12:37.799 "current_io_qpairs": 0, 00:12:37.799 "pending_bdev_io": 0, 00:12:37.799 "completed_nvme_io": 0, 00:12:37.799 "transports": [] 00:12:37.799 }, 00:12:37.799 { 00:12:37.799 "name": "nvmf_tgt_poll_group_001", 00:12:37.799 "admin_qpairs": 0, 00:12:37.799 "io_qpairs": 0, 00:12:37.799 "current_admin_qpairs": 0, 00:12:37.799 "current_io_qpairs": 0, 00:12:37.799 "pending_bdev_io": 0, 00:12:37.799 "completed_nvme_io": 0, 00:12:37.799 "transports": [] 00:12:37.799 }, 00:12:37.799 { 00:12:37.799 "name": "nvmf_tgt_poll_group_002", 00:12:37.799 "admin_qpairs": 0, 00:12:37.799 "io_qpairs": 0, 00:12:37.799 "current_admin_qpairs": 0, 00:12:37.799 "current_io_qpairs": 0, 00:12:37.799 "pending_bdev_io": 0, 00:12:37.799 "completed_nvme_io": 0, 00:12:37.799 "transports": [] 00:12:37.799 }, 00:12:37.799 { 00:12:37.799 "name": "nvmf_tgt_poll_group_003", 00:12:37.799 "admin_qpairs": 0, 00:12:37.799 "io_qpairs": 0, 00:12:37.799 "current_admin_qpairs": 0, 00:12:37.799 "current_io_qpairs": 0, 00:12:37.799 "pending_bdev_io": 0, 00:12:37.799 "completed_nvme_io": 0, 00:12:37.799 "transports": [] 00:12:37.799 } 00:12:37.799 ] 00:12:37.799 }' 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:37.799 00:58:26 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.799 [2024-07-14 00:58:27.076134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:37.799 "tick_rate": 2700000000, 00:12:37.799 "poll_groups": [ 00:12:37.799 { 00:12:37.799 "name": "nvmf_tgt_poll_group_000", 00:12:37.799 "admin_qpairs": 0, 00:12:37.799 "io_qpairs": 0, 00:12:37.799 "current_admin_qpairs": 0, 00:12:37.799 "current_io_qpairs": 0, 00:12:37.799 "pending_bdev_io": 0, 00:12:37.799 "completed_nvme_io": 0, 00:12:37.799 "transports": [ 00:12:37.799 { 00:12:37.799 "trtype": "TCP" 00:12:37.799 } 00:12:37.799 ] 00:12:37.799 }, 00:12:37.799 { 00:12:37.799 "name": "nvmf_tgt_poll_group_001", 00:12:37.799 "admin_qpairs": 0, 00:12:37.799 "io_qpairs": 0, 00:12:37.799 "current_admin_qpairs": 0, 00:12:37.799 "current_io_qpairs": 0, 00:12:37.799 "pending_bdev_io": 0, 00:12:37.799 "completed_nvme_io": 0, 00:12:37.799 "transports": [ 00:12:37.799 { 00:12:37.799 "trtype": "TCP" 00:12:37.799 } 00:12:37.799 ] 00:12:37.799 }, 00:12:37.799 { 00:12:37.799 "name": "nvmf_tgt_poll_group_002", 00:12:37.799 "admin_qpairs": 0, 00:12:37.799 "io_qpairs": 0, 00:12:37.799 "current_admin_qpairs": 0, 00:12:37.799 "current_io_qpairs": 0, 00:12:37.799 "pending_bdev_io": 0, 00:12:37.799 "completed_nvme_io": 0, 00:12:37.799 "transports": [ 00:12:37.799 { 00:12:37.799 "trtype": "TCP" 00:12:37.799 } 00:12:37.799 ] 00:12:37.799 }, 00:12:37.799 { 00:12:37.799 "name": "nvmf_tgt_poll_group_003", 00:12:37.799 "admin_qpairs": 0, 00:12:37.799 "io_qpairs": 0, 00:12:37.799 "current_admin_qpairs": 0, 00:12:37.799 "current_io_qpairs": 0, 00:12:37.799 "pending_bdev_io": 0, 00:12:37.799 "completed_nvme_io": 0, 00:12:37.799 "transports": [ 00:12:37.799 { 00:12:37.799 "trtype": "TCP" 00:12:37.799 } 00:12:37.799 ] 00:12:37.799 } 00:12:37.799 ] 00:12:37.799 }' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.799 Malloc1 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.799 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.060 [2024-07-14 00:58:27.238077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:38.060 [2024-07-14 00:58:27.260719] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:38.060 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:38.060 could not add new controller: failed to write to nvme-fabrics device 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.060 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.629 00:58:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.629 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.629 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.629 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:38.629 00:58:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:40.604 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:40.604 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:40.604 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.605 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:40.605 00:58:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.605 00:58:29 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:40.605 00:58:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.863 [2024-07-14 00:58:30.079659] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:40.863 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:40.863 could not add new controller: failed to write to nvme-fabrics device 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.863 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.432 00:58:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.432 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.432 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.432 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.432 00:58:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.974 00:58:32 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.974 [2024-07-14 00:58:32.917167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.974 00:58:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.233 00:58:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.233 00:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:44.233 00:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.234 00:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:44.234 00:58:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 [2024-07-14 00:58:35.762064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.773 00:58:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.033 00:58:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.033 00:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:47.033 00:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.033 00:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:47.033 00:58:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.572 [2024-07-14 00:58:38.529972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.572 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.573 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.573 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.573 00:58:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.573 00:58:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.831 00:58:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.831 00:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.831 00:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.831 00:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.831 00:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:51.742 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.743 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.743 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.002 [2024-07-14 00:58:41.296374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.002 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.571 00:58:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.571 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:52.571 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.571 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:52.571 00:58:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.105 
00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.105 00:58:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.105 [2024-07-14 00:58:44.026670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.105 00:58:44 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.105 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.365 00:58:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.365 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.365 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.365 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.365 00:58:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:57.268 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:57.268 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:57.268 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.268 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:57.268 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.268 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:57.268 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 [2024-07-14 00:58:46.795986] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 [2024-07-14 00:58:46.844076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 [2024-07-14 00:58:46.892274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
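The connect/disconnect pass that completed above (target/rpc.sh@81-94) walks the target through a full host attach cycle on every iteration. A minimal sketch of one iteration, assuming the same NQN, serial, bdev and 10.0.0.2:4420 listener that appear in the log entries (rpc_cmd in the log is a thin wrapper around scripts/rpc.py, the connect additionally passes --hostnqn/--hostid, and waitforserial/waitforserial_disconnect simply poll lsblk for the serial):

  # sketch only -- names and addresses are the ones shown in the log above
  NQN=nqn.2016-06.io.spdk:cnode1
  SERIAL=SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"
  scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"
  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420                         # host-side attach
  until lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 2; done      # waitforserial
  nvme disconnect -n "$NQN"
  while lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 2; done      # waitforserial_disconnect
  scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
  scripts/rpc.py nvmf_delete_subsystem "$NQN"
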
00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.528 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.528 [2024-07-14 00:58:46.940445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
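The pass running around this point (target/rpc.sh@99-107) repeats the same subsystem lifecycle five times purely over RPC, with no host connect, and afterwards (in the entries that follow) dumps nvmf_get_stats and sums the per-poll-group qpair counters. A condensed sketch under the same assumptions as above:

  NQN=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1     # default nsid, removed as nsid 1 below
      scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"
      scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 1
      scripts/rpc.py nvmf_delete_subsystem "$NQN"
  done
  # jsum in the log: sum one stats field across all poll groups and assert it is non-zero
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 7 in this run
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 336 in this run
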
00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 [2024-07-14 00:58:46.988618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:57.834 "tick_rate": 2700000000, 00:12:57.834 "poll_groups": [ 00:12:57.834 { 00:12:57.834 "name": "nvmf_tgt_poll_group_000", 00:12:57.834 "admin_qpairs": 2, 00:12:57.834 "io_qpairs": 84, 00:12:57.834 "current_admin_qpairs": 0, 00:12:57.834 "current_io_qpairs": 0, 00:12:57.834 "pending_bdev_io": 0, 00:12:57.834 "completed_nvme_io": 231, 00:12:57.834 "transports": [ 00:12:57.834 { 00:12:57.834 "trtype": "TCP" 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 }, 00:12:57.834 { 00:12:57.834 "name": "nvmf_tgt_poll_group_001", 00:12:57.834 "admin_qpairs": 2, 00:12:57.834 "io_qpairs": 84, 00:12:57.834 "current_admin_qpairs": 0, 00:12:57.834 "current_io_qpairs": 0, 00:12:57.834 "pending_bdev_io": 0, 00:12:57.834 "completed_nvme_io": 186, 00:12:57.834 "transports": [ 00:12:57.834 { 00:12:57.834 "trtype": "TCP" 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 }, 00:12:57.834 { 00:12:57.834 
"name": "nvmf_tgt_poll_group_002", 00:12:57.834 "admin_qpairs": 1, 00:12:57.834 "io_qpairs": 84, 00:12:57.834 "current_admin_qpairs": 0, 00:12:57.834 "current_io_qpairs": 0, 00:12:57.834 "pending_bdev_io": 0, 00:12:57.834 "completed_nvme_io": 134, 00:12:57.834 "transports": [ 00:12:57.834 { 00:12:57.834 "trtype": "TCP" 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 }, 00:12:57.834 { 00:12:57.834 "name": "nvmf_tgt_poll_group_003", 00:12:57.834 "admin_qpairs": 2, 00:12:57.834 "io_qpairs": 84, 00:12:57.834 "current_admin_qpairs": 0, 00:12:57.834 "current_io_qpairs": 0, 00:12:57.834 "pending_bdev_io": 0, 00:12:57.834 "completed_nvme_io": 135, 00:12:57.834 "transports": [ 00:12:57.834 { 00:12:57.834 "trtype": "TCP" 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 } 00:12:57.834 ] 00:12:57.834 }' 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:57.834 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.835 rmmod nvme_tcp 00:12:57.835 rmmod nvme_fabrics 00:12:57.835 rmmod nvme_keyring 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1073294 ']' 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1073294 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1073294 ']' 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1073294 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1073294 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1073294' 00:12:57.835 killing process with pid 1073294 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1073294 00:12:57.835 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1073294 00:12:58.095 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.095 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.095 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.095 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.095 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.095 00:58:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.095 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.095 00:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.628 00:58:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.628 00:13:00.628 real 0m25.203s 00:13:00.628 user 1m21.663s 00:13:00.628 sys 0m4.119s 00:13:00.628 00:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:00.628 00:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.628 ************************************ 00:13:00.628 END TEST nvmf_rpc 00:13:00.628 ************************************ 00:13:00.628 00:58:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:00.628 00:58:49 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.628 00:58:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:00.628 00:58:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.628 00:58:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:00.628 ************************************ 00:13:00.628 START TEST nvmf_invalid 00:13:00.628 ************************************ 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.629 * Looking for test storage... 
00:13:00.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.629 00:58:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.527 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.527 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:02.527 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.527 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.527 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:02.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:13:02.528 00:13:02.528 --- 10.0.0.2 ping statistics --- 00:13:02.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.528 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:13:02.528 00:13:02.528 --- 10.0.0.1 ping statistics --- 00:13:02.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.528 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1077919 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1077919 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1077919 ']' 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:02.528 00:58:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:02.528 [2024-07-14 00:58:51.921200] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:13:02.528 [2024-07-14 00:58:51.921273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.786 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.786 [2024-07-14 00:58:51.987416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.786 [2024-07-14 00:58:52.081312] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.786 [2024-07-14 00:58:52.081373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.786 [2024-07-14 00:58:52.081398] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.786 [2024-07-14 00:58:52.081411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.786 [2024-07-14 00:58:52.081423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.786 [2024-07-14 00:58:52.081503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.786 [2024-07-14 00:58:52.081556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.786 [2024-07-14 00:58:52.081619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.786 [2024-07-14 00:58:52.081621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.043 00:58:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.043 00:58:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:03.043 00:58:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:03.043 00:58:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:03.043 00:58:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.043 00:58:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.043 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:03.043 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6178 00:13:03.302 [2024-07-14 00:58:52.466582] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:03.302 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:03.302 { 00:13:03.302 "nqn": "nqn.2016-06.io.spdk:cnode6178", 00:13:03.302 "tgt_name": "foobar", 00:13:03.302 "method": "nvmf_create_subsystem", 00:13:03.302 "req_id": 1 00:13:03.302 } 00:13:03.302 Got JSON-RPC error response 00:13:03.302 response: 00:13:03.302 { 00:13:03.302 "code": -32603, 00:13:03.302 "message": "Unable to find target foobar" 00:13:03.302 }' 00:13:03.302 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:03.302 { 00:13:03.302 "nqn": "nqn.2016-06.io.spdk:cnode6178", 00:13:03.302 "tgt_name": "foobar", 00:13:03.302 "method": "nvmf_create_subsystem", 00:13:03.302 "req_id": 1 00:13:03.302 } 00:13:03.302 Got JSON-RPC error response 00:13:03.302 response: 00:13:03.302 { 00:13:03.302 "code": -32603, 00:13:03.302 "message": "Unable to find target foobar" 00:13:03.302 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:03.302 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:03.302 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2649 00:13:03.302 [2024-07-14 00:58:52.707426] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2649: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:03.560 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:03.560 { 00:13:03.560 "nqn": "nqn.2016-06.io.spdk:cnode2649", 00:13:03.560 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:03.560 "method": "nvmf_create_subsystem", 00:13:03.560 "req_id": 1 00:13:03.560 } 00:13:03.560 Got JSON-RPC error response 00:13:03.560 response: 00:13:03.560 { 00:13:03.560 "code": -32602, 00:13:03.560 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:03.560 }' 00:13:03.560 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:03.560 { 00:13:03.560 "nqn": "nqn.2016-06.io.spdk:cnode2649", 00:13:03.560 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:03.560 "method": "nvmf_create_subsystem", 00:13:03.560 "req_id": 1 00:13:03.560 } 00:13:03.560 Got JSON-RPC error response 00:13:03.560 response: 00:13:03.560 { 00:13:03.560 "code": -32602, 00:13:03.560 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:03.560 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:03.560 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:03.560 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24697 00:13:03.560 [2024-07-14 00:58:52.972308] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24697: invalid model number 'SPDK_Controller' 00:13:03.819 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:03.819 { 00:13:03.819 "nqn": "nqn.2016-06.io.spdk:cnode24697", 00:13:03.819 "model_number": "SPDK_Controller\u001f", 00:13:03.819 "method": "nvmf_create_subsystem", 00:13:03.819 "req_id": 1 00:13:03.819 } 00:13:03.819 Got JSON-RPC error response 00:13:03.819 response: 00:13:03.819 { 00:13:03.819 "code": -32602, 00:13:03.819 "message": "Invalid MN SPDK_Controller\u001f" 00:13:03.819 }' 00:13:03.819 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:03.819 { 00:13:03.819 "nqn": "nqn.2016-06.io.spdk:cnode24697", 00:13:03.819 "model_number": "SPDK_Controller\u001f", 00:13:03.819 "method": "nvmf_create_subsystem", 00:13:03.819 "req_id": 1 00:13:03.819 } 00:13:03.819 Got JSON-RPC error response 00:13:03.819 response: 00:13:03.819 { 00:13:03.819 "code": -32602, 00:13:03.819 "message": "Invalid MN SPDK_Controller\u001f" 00:13:03.819 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:03.819 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:03.819 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:03.820 00:58:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ # == \- ]] 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '#A,u}/>W+2HSvYvrbpDUM' 00:13:03.820 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '#A,u}/>W+2HSvYvrbpDUM' nqn.2016-06.io.spdk:cnode28624 00:13:04.078 [2024-07-14 00:58:53.289330] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28624: invalid serial number '#A,u}/>W+2HSvYvrbpDUM' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:04.078 { 00:13:04.078 "nqn": "nqn.2016-06.io.spdk:cnode28624", 00:13:04.078 "serial_number": "#A,u}/>W+2HSvYvrbpDUM", 00:13:04.078 "method": "nvmf_create_subsystem", 00:13:04.078 "req_id": 1 00:13:04.078 } 00:13:04.078 Got JSON-RPC error response 00:13:04.078 response: 00:13:04.078 { 00:13:04.078 
"code": -32602, 00:13:04.078 "message": "Invalid SN #A,u}/>W+2HSvYvrbpDUM" 00:13:04.078 }' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:04.078 { 00:13:04.078 "nqn": "nqn.2016-06.io.spdk:cnode28624", 00:13:04.078 "serial_number": "#A,u}/>W+2HSvYvrbpDUM", 00:13:04.078 "method": "nvmf_create_subsystem", 00:13:04.078 "req_id": 1 00:13:04.078 } 00:13:04.078 Got JSON-RPC error response 00:13:04.078 response: 00:13:04.078 { 00:13:04.078 "code": -32602, 00:13:04.078 "message": "Invalid SN #A,u}/>W+2HSvYvrbpDUM" 00:13:04.078 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:04.078 00:58:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:04.078 00:58:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.078 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:04.079 00:58:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '+A0m[Z0%ghBIRG&cZ%T@GKlV!ctG M?pTpQ%8r-*w' 00:13:04.079 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '+A0m[Z0%ghBIRG&cZ%T@GKlV!ctG M?pTpQ%8r-*w' nqn.2016-06.io.spdk:cnode11042 00:13:04.337 [2024-07-14 00:58:53.694651] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11042: invalid model number '+A0m[Z0%ghBIRG&cZ%T@GKlV!ctG M?pTpQ%8r-*w' 00:13:04.337 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:04.337 { 00:13:04.337 "nqn": "nqn.2016-06.io.spdk:cnode11042", 00:13:04.337 "model_number": "+A0m[Z0%ghBIRG&cZ%T@GKlV!ctG M?pTpQ%8r-*w", 00:13:04.337 "method": "nvmf_create_subsystem", 00:13:04.337 "req_id": 1 00:13:04.337 } 00:13:04.337 Got JSON-RPC error response 00:13:04.337 response: 00:13:04.337 { 00:13:04.337 "code": -32602, 00:13:04.337 "message": "Invalid MN +A0m[Z0%ghBIRG&cZ%T@GKlV!ctG M?pTpQ%8r-*w" 00:13:04.337 }' 00:13:04.337 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:04.337 { 00:13:04.337 "nqn": "nqn.2016-06.io.spdk:cnode11042", 00:13:04.337 "model_number": "+A0m[Z0%ghBIRG&cZ%T@GKlV!ctG M?pTpQ%8r-*w", 00:13:04.337 "method": "nvmf_create_subsystem", 00:13:04.337 "req_id": 1 00:13:04.337 } 00:13:04.337 Got JSON-RPC error response 00:13:04.337 response: 00:13:04.337 { 00:13:04.337 "code": -32602, 00:13:04.337 "message": "Invalid MN +A0m[Z0%ghBIRG&cZ%T@GKlV!ctG M?pTpQ%8r-*w" 00:13:04.337 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.337 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:04.594 [2024-07-14 00:58:53.939544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.594 00:58:53 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:04.852 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:04.852 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:04.852 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:04.852 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:04.852 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:05.110 [2024-07-14 00:58:54.449228] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:05.110 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:05.110 { 00:13:05.110 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.110 "listen_address": { 00:13:05.110 "trtype": "tcp", 00:13:05.110 "traddr": "", 00:13:05.110 "trsvcid": "4421" 00:13:05.110 }, 00:13:05.110 "method": "nvmf_subsystem_remove_listener", 00:13:05.110 "req_id": 1 00:13:05.110 } 00:13:05.110 Got JSON-RPC error response 00:13:05.110 response: 00:13:05.110 { 00:13:05.110 "code": -32602, 00:13:05.110 "message": "Invalid parameters" 00:13:05.110 }' 00:13:05.110 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:05.110 { 00:13:05.110 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.110 "listen_address": { 00:13:05.110 "trtype": "tcp", 00:13:05.110 "traddr": "", 00:13:05.110 "trsvcid": "4421" 00:13:05.110 }, 00:13:05.110 "method": "nvmf_subsystem_remove_listener", 00:13:05.110 "req_id": 1 00:13:05.110 } 00:13:05.110 Got JSON-RPC error response 00:13:05.110 response: 00:13:05.110 { 00:13:05.110 "code": -32602, 00:13:05.110 "message": "Invalid parameters" 00:13:05.110 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:05.110 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30344 -i 0 00:13:05.368 [2024-07-14 00:58:54.693988] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30344: invalid cntlid range [0-65519] 00:13:05.368 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:05.368 { 00:13:05.368 "nqn": "nqn.2016-06.io.spdk:cnode30344", 00:13:05.368 "min_cntlid": 0, 00:13:05.368 "method": "nvmf_create_subsystem", 00:13:05.368 "req_id": 1 00:13:05.368 } 00:13:05.368 Got JSON-RPC error response 00:13:05.368 response: 00:13:05.368 { 00:13:05.368 "code": -32602, 00:13:05.368 "message": "Invalid cntlid range [0-65519]" 00:13:05.368 }' 00:13:05.368 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:05.368 { 00:13:05.368 "nqn": "nqn.2016-06.io.spdk:cnode30344", 00:13:05.368 "min_cntlid": 0, 00:13:05.368 "method": "nvmf_create_subsystem", 00:13:05.368 "req_id": 1 00:13:05.368 } 00:13:05.368 Got JSON-RPC error response 00:13:05.368 response: 00:13:05.368 { 00:13:05.368 "code": -32602, 00:13:05.368 "message": "Invalid cntlid range [0-65519]" 00:13:05.368 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:13:05.368 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18748 -i 65520 00:13:05.626 [2024-07-14 00:58:54.942840] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18748: invalid cntlid range [65520-65519] 00:13:05.626 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:05.626 { 00:13:05.626 "nqn": "nqn.2016-06.io.spdk:cnode18748", 00:13:05.626 "min_cntlid": 65520, 00:13:05.626 "method": "nvmf_create_subsystem", 00:13:05.626 "req_id": 1 00:13:05.626 } 00:13:05.626 Got JSON-RPC error response 00:13:05.626 response: 00:13:05.626 { 00:13:05.626 "code": -32602, 00:13:05.626 "message": "Invalid cntlid range [65520-65519]" 00:13:05.626 }' 00:13:05.626 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:05.626 { 00:13:05.626 "nqn": "nqn.2016-06.io.spdk:cnode18748", 00:13:05.626 "min_cntlid": 65520, 00:13:05.626 "method": "nvmf_create_subsystem", 00:13:05.626 "req_id": 1 00:13:05.626 } 00:13:05.626 Got JSON-RPC error response 00:13:05.626 response: 00:13:05.626 { 00:13:05.626 "code": -32602, 00:13:05.626 "message": "Invalid cntlid range [65520-65519]" 00:13:05.626 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.626 00:58:54 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24504 -I 0 00:13:05.885 [2024-07-14 00:58:55.203696] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24504: invalid cntlid range [1-0] 00:13:05.885 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:05.885 { 00:13:05.885 "nqn": "nqn.2016-06.io.spdk:cnode24504", 00:13:05.885 "max_cntlid": 0, 00:13:05.885 "method": "nvmf_create_subsystem", 00:13:05.885 "req_id": 1 00:13:05.885 } 00:13:05.885 Got JSON-RPC error response 00:13:05.885 response: 00:13:05.885 { 00:13:05.885 "code": -32602, 00:13:05.885 "message": "Invalid cntlid range [1-0]" 00:13:05.885 }' 00:13:05.885 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:05.885 { 00:13:05.885 "nqn": "nqn.2016-06.io.spdk:cnode24504", 00:13:05.885 "max_cntlid": 0, 00:13:05.885 "method": "nvmf_create_subsystem", 00:13:05.885 "req_id": 1 00:13:05.885 } 00:13:05.885 Got JSON-RPC error response 00:13:05.885 response: 00:13:05.885 { 00:13:05.885 "code": -32602, 00:13:05.885 "message": "Invalid cntlid range [1-0]" 00:13:05.885 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.885 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1692 -I 65520 00:13:06.143 [2024-07-14 00:58:55.456494] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1692: invalid cntlid range [1-65520] 00:13:06.143 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:06.143 { 00:13:06.143 "nqn": "nqn.2016-06.io.spdk:cnode1692", 00:13:06.143 "max_cntlid": 65520, 00:13:06.143 "method": "nvmf_create_subsystem", 00:13:06.143 "req_id": 1 00:13:06.143 } 00:13:06.143 Got JSON-RPC error response 00:13:06.143 response: 00:13:06.143 { 00:13:06.143 "code": -32602, 00:13:06.143 "message": "Invalid cntlid range [1-65520]" 00:13:06.143 }' 00:13:06.143 00:58:55 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:13:06.143 { 00:13:06.143 "nqn": "nqn.2016-06.io.spdk:cnode1692", 00:13:06.143 "max_cntlid": 65520, 00:13:06.143 "method": "nvmf_create_subsystem", 00:13:06.143 "req_id": 1 00:13:06.143 } 00:13:06.143 Got JSON-RPC error response 00:13:06.143 response: 00:13:06.143 { 00:13:06.143 "code": -32602, 00:13:06.143 "message": "Invalid cntlid range [1-65520]" 00:13:06.143 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.143 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16790 -i 6 -I 5 00:13:06.401 [2024-07-14 00:58:55.705347] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16790: invalid cntlid range [6-5] 00:13:06.401 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:06.401 { 00:13:06.401 "nqn": "nqn.2016-06.io.spdk:cnode16790", 00:13:06.401 "min_cntlid": 6, 00:13:06.401 "max_cntlid": 5, 00:13:06.401 "method": "nvmf_create_subsystem", 00:13:06.401 "req_id": 1 00:13:06.401 } 00:13:06.401 Got JSON-RPC error response 00:13:06.401 response: 00:13:06.401 { 00:13:06.401 "code": -32602, 00:13:06.401 "message": "Invalid cntlid range [6-5]" 00:13:06.401 }' 00:13:06.401 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:06.401 { 00:13:06.401 "nqn": "nqn.2016-06.io.spdk:cnode16790", 00:13:06.401 "min_cntlid": 6, 00:13:06.401 "max_cntlid": 5, 00:13:06.401 "method": "nvmf_create_subsystem", 00:13:06.401 "req_id": 1 00:13:06.401 } 00:13:06.401 Got JSON-RPC error response 00:13:06.401 response: 00:13:06.401 { 00:13:06.401 "code": -32602, 00:13:06.401 "message": "Invalid cntlid range [6-5]" 00:13:06.401 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.401 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:06.662 { 00:13:06.662 "name": "foobar", 00:13:06.662 "method": "nvmf_delete_target", 00:13:06.662 "req_id": 1 00:13:06.662 } 00:13:06.662 Got JSON-RPC error response 00:13:06.662 response: 00:13:06.662 { 00:13:06.662 "code": -32602, 00:13:06.662 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:06.662 }' 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:06.662 { 00:13:06.662 "name": "foobar", 00:13:06.662 "method": "nvmf_delete_target", 00:13:06.662 "req_id": 1 00:13:06.662 } 00:13:06.662 Got JSON-RPC error response 00:13:06.662 response: 00:13:06.662 { 00:13:06.662 "code": -32602, 00:13:06.662 "message": "The specified target doesn't exist, cannot delete it." 
00:13:06.662 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:06.662 rmmod nvme_tcp 00:13:06.662 rmmod nvme_fabrics 00:13:06.662 rmmod nvme_keyring 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1077919 ']' 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1077919 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1077919 ']' 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1077919 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1077919 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1077919' 00:13:06.662 killing process with pid 1077919 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1077919 00:13:06.662 00:58:55 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1077919 00:13:06.922 00:58:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:06.922 00:58:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:06.922 00:58:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:06.922 00:58:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.922 00:58:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.922 00:58:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.922 00:58:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.922 00:58:56 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.830 00:58:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.830 00:13:08.830 real 0m8.650s 00:13:08.830 user 0m19.838s 00:13:08.830 sys 0m2.491s 00:13:08.830 00:58:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.830 00:58:58 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:08.830 ************************************ 00:13:08.830 END TEST nvmf_invalid 00:13:08.830 ************************************ 00:13:08.830 00:58:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:08.830 00:58:58 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:08.830 00:58:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.830 00:58:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.830 00:58:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:09.089 ************************************ 00:13:09.089 START TEST nvmf_abort 00:13:09.089 ************************************ 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:09.089 * Looking for test storage... 00:13:09.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.089 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:09.090 00:58:58 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.090 00:58:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:10.995 
00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:10.995 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:10.995 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:10.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.995 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:10.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:10.996 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:11.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:11.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:13:11.255 00:13:11.255 --- 10.0.0.2 ping statistics --- 00:13:11.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.255 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:11.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:13:11.255 00:13:11.255 --- 10.0.0.1 ping statistics --- 00:13:11.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.255 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1080434 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1080434 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1080434 ']' 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.255 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.255 [2024-07-14 00:59:00.495780] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
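[editor's note] The nvmf_tcp_init trace above reduces to the following sequence; this is a condensed sketch assembled from the traced commands, not a script shipped by the test suite. Interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing come from this particular run.

    # move one port (cvl_0_0) into a private namespace to act as the target;
    # the other port (cvl_0_1) stays in the default namespace as the initiator
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1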
00:13:11.255 [2024-07-14 00:59:00.495874] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.255 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.255 [2024-07-14 00:59:00.558537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:11.255 [2024-07-14 00:59:00.642509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.255 [2024-07-14 00:59:00.642563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.255 [2024-07-14 00:59:00.642588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.255 [2024-07-14 00:59:00.642599] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.255 [2024-07-14 00:59:00.642609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.255 [2024-07-14 00:59:00.642701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.255 [2024-07-14 00:59:00.642763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.255 [2024-07-14 00:59:00.642765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 [2024-07-14 00:59:00.783489] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 Malloc0 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 Delay0 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 [2024-07-14 00:59:00.849687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.516 00:59:00 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:11.516 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.776 [2024-07-14 00:59:00.956309] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:13.709 Initializing NVMe Controllers 00:13:13.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:13.709 controller IO queue size 128 less than required 00:13:13.709 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:13.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:13.709 Initialization complete. Launching workers. 
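[editor's note] Condensed, the abort.sh setup traced above amounts to the calls below; rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the nvmf_tgt started inside the target namespace, and the comments reflect what the surrounding log reports rather than guaranteed behaviour.

    # TCP transport plus a delay-backed namespace on subsystem cnode0
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # drive the target with the abort example at queue depth 128; per the log
    # it aborts in-flight I/O and reports success/failure counts when done
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128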
00:13:13.709 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32312 00:13:13.709 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32373, failed to submit 62 00:13:13.709 success 32316, unsuccess 57, failed 0 00:13:13.709 00:59:03 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:13.709 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.709 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:13.709 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.709 00:59:03 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:13.709 00:59:03 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:13.710 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.710 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:13.710 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.710 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:13.710 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.710 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.710 rmmod nvme_tcp 00:13:13.710 rmmod nvme_fabrics 00:13:13.710 rmmod nvme_keyring 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1080434 ']' 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1080434 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1080434 ']' 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1080434 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1080434 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1080434' 00:13:13.972 killing process with pid 1080434 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1080434 00:13:13.972 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1080434 00:13:14.232 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.232 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.232 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.232 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.232 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.232 00:59:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.232 00:59:03 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.232 00:59:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.136 00:59:05 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.136 00:13:16.136 real 0m7.191s 00:13:16.136 user 0m10.588s 00:13:16.136 sys 0m2.433s 00:13:16.136 00:59:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:16.136 00:59:05 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:16.136 ************************************ 00:13:16.136 END TEST nvmf_abort 00:13:16.136 ************************************ 00:13:16.136 00:59:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:16.136 00:59:05 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:16.136 00:59:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:16.136 00:59:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:16.136 00:59:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:16.136 ************************************ 00:13:16.136 START TEST nvmf_ns_hotplug_stress 00:13:16.136 ************************************ 00:13:16.136 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:16.136 * Looking for test storage... 00:13:16.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.396 00:59:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.396 00:59:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.396 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.397 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.397 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.397 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.397 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.397 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.397 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.397 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.397 00:59:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:18.298 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:18.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:18.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.299 00:59:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:18.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:18.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.299 00:59:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:18.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:13:18.299 00:13:18.299 --- 10.0.0.2 ping statistics --- 00:13:18.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.299 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:13:18.299 00:13:18.299 --- 10.0.0.1 ping statistics --- 00:13:18.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.299 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1082767 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1082767 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1082767 ']' 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.299 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.299 [2024-07-14 00:59:07.684151] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
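[editor's note] The ns_hotplug_stress trace that follows repeats one pattern for the duration of a 30-second perf run. The sketch below is reconstructed from the traced rpc.py calls and is only an approximation of ns_hotplug_stress.sh; the loop structure and the PERF_PID handling are assumptions, the individual commands are taken from the log.

    # subsystem cnode1 with a delay-backed namespace and a resizable null bdev
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # run random-read I/O against the subsystem in the background ...
    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    # ... while namespaces are removed, re-added and resized underneath it
    null_size=1000
    while kill -0 "$PERF_PID"; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done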
00:13:18.299 [2024-07-14 00:59:07.684234] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.557 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.557 [2024-07-14 00:59:07.747751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.557 [2024-07-14 00:59:07.836788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.557 [2024-07-14 00:59:07.836850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.557 [2024-07-14 00:59:07.836884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.557 [2024-07-14 00:59:07.836899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.557 [2024-07-14 00:59:07.836911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.557 [2024-07-14 00:59:07.837034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.557 [2024-07-14 00:59:07.837120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.557 [2024-07-14 00:59:07.837123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.557 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.557 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:18.557 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.557 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:18.557 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.557 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.557 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:18.557 00:59:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:18.815 [2024-07-14 00:59:08.189284] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.815 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:19.384 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.384 [2024-07-14 00:59:08.780285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.642 00:59:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:19.642 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:19.900 Malloc0 00:13:19.900 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:20.158 Delay0 00:13:20.158 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.416 00:59:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:20.674 NULL1 00:13:20.674 00:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:20.932 00:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1083073 00:13:20.932 00:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:20.932 00:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:20.932 00:59:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.191 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.130 Read completed with error (sct=0, sc=11) 00:13:22.130 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.388 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:22.388 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:22.652 true 00:13:22.652 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:22.652 00:59:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.591 00:59:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.849 00:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:23.849 00:59:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:24.106 true 00:13:24.106 00:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:24.106 00:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.363 00:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.622 00:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:24.622 00:59:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:24.622 true 00:13:24.881 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:24.881 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.881 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.140 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:25.140 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:25.398 true 00:13:25.398 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:25.398 00:59:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.774 00:59:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.775 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:26.775 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:27.032 true 00:13:27.032 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:27.032 00:59:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.969 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.228 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:28.228 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:28.527 true 00:13:28.528 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:28.528 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.528 00:59:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.787 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:28.787 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:29.046 true 00:13:29.046 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:29.046 00:59:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.982 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.241 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:30.241 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:30.499 true 00:13:30.499 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:30.499 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.758 00:59:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.016 00:59:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:31.016 00:59:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:31.274 true 00:13:31.274 00:59:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:31.274 00:59:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.532 00:59:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.790 00:59:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:31.791 00:59:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:32.049 true 00:13:32.049 00:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:32.049 00:59:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.984 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.242 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:33.242 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:33.500 true 00:13:33.500 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:33.500 00:59:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.758 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.026 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:34.026 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:34.287 true 00:13:34.287 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:34.287 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.545 00:59:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.803 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:34.803 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:35.061 true 00:13:35.061 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:35.061 00:59:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.997 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.255 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:36.255 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:36.513 true 00:13:36.513 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:36.513 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.771 00:59:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.029 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:37.029 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:37.287 true 00:13:37.287 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:37.287 00:59:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.853 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.111 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:38.111 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:38.369 true 00:13:38.369 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:38.369 00:59:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.627 00:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.885 00:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:38.885 00:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:39.143 true 00:13:39.143 00:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:39.143 00:59:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.078 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.078 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.078 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:13:40.335 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:40.335 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:40.593 true 00:13:40.593 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:40.593 00:59:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.850 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.107 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:41.107 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:41.365 true 00:13:41.365 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:41.365 00:59:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.347 00:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.605 00:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:42.605 00:59:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:42.862 true 00:13:42.862 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:42.862 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.120 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.378 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:43.378 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:43.635 true 00:13:43.635 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:43.635 00:59:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.893 00:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.150 00:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:44.150 00:59:33 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:44.406 true 00:13:44.406 00:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:44.406 00:59:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.338 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.338 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.596 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:45.596 00:59:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:45.853 true 00:13:45.853 00:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:45.853 00:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.110 00:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.368 00:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:46.368 00:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:46.626 true 00:13:46.626 00:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:46.626 00:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.558 00:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.816 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:47.816 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:48.074 true 00:13:48.074 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:48.074 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.332 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.590 00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:48.590 
00:59:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:48.848 true 00:13:48.848 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:48.848 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.784 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.784 00:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.043 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:50.043 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:50.043 true 00:13:50.302 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:50.302 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.302 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.868 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:50.868 00:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:50.868 true 00:13:50.868 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:50.868 00:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.804 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.804 Initializing NVMe Controllers 00:13:51.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.804 Controller IO queue size 128, less than required. 00:13:51.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:51.804 Controller IO queue size 128, less than required. 00:13:51.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:51.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:51.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:51.804 Initialization complete. Launching workers. 
00:13:51.804 ======================================================== 00:13:51.804 Latency(us) 00:13:51.804 Device Information : IOPS MiB/s Average min max 00:13:51.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 722.46 0.35 91254.79 2787.32 1012733.44 00:13:51.804 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9629.83 4.70 13293.41 3371.95 450495.43 00:13:51.804 ======================================================== 00:13:51.804 Total : 10352.30 5.05 18734.17 2787.32 1012733.44 00:13:51.804 00:13:52.063 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:52.063 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:52.321 true 00:13:52.321 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1083073 00:13:52.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1083073) - No such process 00:13:52.321 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1083073 00:13:52.321 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.582 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.582 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:52.582 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:52.582 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:52.582 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:52.582 00:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:52.841 null0 00:13:53.101 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.101 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.101 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:53.101 null1 00:13:53.101 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.101 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.101 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:53.360 null2 00:13:53.360 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.360 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.360 00:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:53.619 null3 
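
The iterations above all follow the same shape: check that the background I/O workload (PID 1083073) is still alive, hot-remove namespace 1 from cnode1, hot-add the Delay0 bdev back, then grow the NULL1 null bdev by one unit. A minimal bash sketch of that loop, reconstructed from the sh@44-55 xtrace lines (the rpc.py path, NQN and PID are taken from the trace; the perf_pid variable name and the starting null_size are assumptions, since neither is shown in this excerpt):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=1083073      # background workload PID seen in the kill -0 checks (assumed variable name)
    null_size=1000        # starting value not shown in this excerpt

    # sh@44-50: cycle the namespace and resize the null bdev while the workload runs
    while kill -0 "$perf_pid"; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1        # sh@45: hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0      # sh@46: hot-add the Delay0 bdev again
        ((null_size++))                                 # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"      # sh@50: grow NULL1 by one unit
    done
    wait "$perf_pid"                                    # sh@53: reap the finished workload
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1            # sh@54
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 2            # sh@55
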
00:13:53.878 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.878 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.878 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:54.138 null4 00:13:54.138 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.138 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.138 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:54.138 null5 00:13:54.396 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.396 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.396 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:54.396 null6 00:13:54.396 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.396 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.396 00:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:54.654 null7 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
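
With the single-namespace loop finished, the test switches to eight parallel workers: sh@59-60 create eight 100 MB null bdevs (null0..null7, 4096-byte block size, arguments exactly as traced), and sh@62-64 start one add_remove worker per bdev in the background, collecting the PIDs for the later wait at sh@66. A hedged sketch of that launcher follows; rpc is the same placeholder path as in the previous sketch, and add_remove is the worker function sketched after the next trace block:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096       # sh@60: 100 MB bdev, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &              # sh@63: NSID 1..8 over null0..null7
        pids+=($!)                                      # sh@64: remember each worker's PID
    done
    wait "${pids[@]}"                                   # sh@66: join all eight workers
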
00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.654 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
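
The interleaved sh@14-18 lines above and below come from those eight backgrounded workers; each worker is the same small function, taking a namespace ID and a bdev name and hot-adding then hot-removing that pair ten times against cnode1. A hedged reconstruction from the xtrace (rpc and nqn as in the earlier sketches):

    add_remove() {
        local nsid=$1 bdev=$2                                        # sh@14
        for ((i = 0; i < 10; i++)); do                               # sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # sh@17: attach bdev as NSID
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # sh@18: detach it again
        done
    }
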
00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1087114 1087115 1087117 1087119 1087121 1087123 1087125 1087127 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.655 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:54.913 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.913 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.913 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.200 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.458 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.717 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.717 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.717 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.717 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.718 00:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.718 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.718 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.718 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:55.718 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.718 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.718 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.976 00:59:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.976 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.234 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.234 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.234 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.234 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.234 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.234 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.234 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.234 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.493 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.752 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.752 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.752 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.752 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.752 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.752 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.752 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.752 00:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.011 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.011 
00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.269 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.270 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.270 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.270 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.270 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.270 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.270 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.270 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.528 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.529 00:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.787 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.787 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.787 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.787 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.787 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.787 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.787 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.787 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.045 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:58.045 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.045 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.045 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.045 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.046 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.304 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.304 
00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.304 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.304 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.304 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.304 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.304 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.304 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.563 00:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.821 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.821 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.821 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.821 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.821 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.821 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.821 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.821 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.080 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.081 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.081 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.081 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.081 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.081 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.081 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.081 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.340 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.340 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.340 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.340 
00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.340 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.340 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.340 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.340 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.598 00:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.598 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.857 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.115 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.115 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.115 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.115 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.115 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.115 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.115 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.115 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
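The interleaved trace above is the namespace hotplug churn from ns_hotplug_stress.sh: lines tagged @16 are a loop counter, @17 attaches one of the null bdevs to nqn.2016-06.io.spdk:cnode1 as a namespace via rpc.py, and @18 detaches it again. A minimal sketch of that churn, reconstructed only from the rpc.py calls visible in the trace; the per-namespace background workers, the loop shape, and the rpc/nqn variables are assumptions made to explain the shuffled add/remove ordering in the log, not the literal script.

  # hypothetical reconstruction of the hotplug loop traced above
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; ++i)); do                          # ns_hotplug_stress.sh@16
          $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # @17: attach namespace
          $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"            # @18: detach it again
      done
  }

  # one worker per null bdev; running them concurrently would produce the
  # out-of-order add/remove lines seen in the log (assumption)
  for n in $(seq 1 8); do
      add_remove "$n" "null$((n - 1))" &
  done
  wait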
00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.374 rmmod nvme_tcp 00:14:00.374 rmmod nvme_fabrics 00:14:00.374 rmmod nvme_keyring 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1082767 ']' 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1082767 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1082767 ']' 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1082767 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1082767 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1082767' 00:14:00.374 killing process with pid 1082767 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1082767 00:14:00.374 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1082767 00:14:00.633 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.633 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.633 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.633 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.633 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.633 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.633 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.633 00:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.535 00:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.535 00:14:02.535 real 0m46.432s 00:14:02.535 user 3m31.016s 00:14:02.535 sys 0m16.440s 00:14:02.535 00:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:02.535 00:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.535 ************************************ 00:14:02.535 END TEST nvmf_ns_hotplug_stress 00:14:02.535 ************************************ 00:14:02.794 00:59:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:02.794 00:59:51 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.794 00:59:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:02.794 00:59:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:02.794 00:59:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.794 ************************************ 00:14:02.794 START TEST nvmf_connect_stress 00:14:02.794 ************************************ 00:14:02.794 00:59:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.794 * Looking for test storage... 
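Once the loop counters run out, the trap is cleared and nvmftestfini (nvmf/common.sh) tears the target back down, as traced above: the kernel initiator modules are unloaded (hence the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines), the nvmf_tgt process (pid 1082767 in this run) is killed and reaped, the SPDK network namespace is removed, and the test address on cvl_0_1 is flushed. A condensed sketch of that teardown using the commands visible in the trace; error handling is omitted and the netns deletion inside _remove_spdk_ns is an assumed implementation detail.

  # condensed sketch of nvmftestfini as traced above
  sync
  modprobe -v -r nvme-tcp                        # unloads nvme_tcp (and drops fabrics/keyring)
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"             # stop nvmf_tgt (pid 1082767 above)
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # _remove_spdk_ns (assumed form)
  ip -4 addr flush cvl_0_1                       # drop the initiator-side test address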
00:14:02.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.794 00:59:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.697 00:59:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.697 00:59:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.697 00:59:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.697 00:59:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:04.697 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:04.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:04.697 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:04.698 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.698 00:59:54 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:04.698 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.698 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:14:04.956 00:14:04.956 --- 10.0.0.2 ping statistics --- 00:14:04.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.956 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:14:04.956 00:14:04.956 --- 10.0.0.1 ping statistics --- 00:14:04.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.956 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.956 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1089875 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1089875 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1089875 ']' 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.957 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.957 [2024-07-14 00:59:54.217409] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
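nvmftestinit, traced above, carves the two ice ports into a point-to-point test topology: the target port cvl_0_0 is moved into a dedicated network namespace and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, connectivity is checked with ping in both directions, and nvme-tcp is loaded before nvmf_tgt is started inside the namespace. The same plumbing collapsed into a plain command sequence, taken from the trace; $rootdir is shorthand for the spdk checkout path shown in the log.

  # plumbing performed by nvmf_tcp_init / nvmfappstart, per the trace above
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  modprobe nvme-tcp
  # the target app is then launched inside the namespace:
  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &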
00:14:04.957 [2024-07-14 00:59:54.217505] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.957 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.957 [2024-07-14 00:59:54.287954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.215 [2024-07-14 00:59:54.378942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.215 [2024-07-14 00:59:54.378997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.215 [2024-07-14 00:59:54.379024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.215 [2024-07-14 00:59:54.379037] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.215 [2024-07-14 00:59:54.379050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.215 [2024-07-14 00:59:54.379148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.215 [2024-07-14 00:59:54.379261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.216 [2024-07-14 00:59:54.379264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.216 [2024-07-14 00:59:54.518754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.216 [2024-07-14 00:59:54.546009] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.216 NULL1 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1090021 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.216 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.782 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.782 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:05.782 00:59:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.782 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.782 00:59:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.040 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.040 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:06.040 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.040 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.040 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.297 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.297 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 
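The connect_stress body traced above stands up a minimal TCP target and then hammers it: a TCP transport with an 8192-byte IO unit, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev NULL1 (1000 MB, 512-byte blocks). The connect_stress tool (PERF_PID 1090021 above) then connects and disconnects for 10 seconds while the script keeps the target busy with RPCs and, as in the kill -0 / rpc_cmd iterations that repeat below, checks each round that the stressor is still alive. A compressed sketch of that flow; the contents of the generated rpc.txt (built by the seq 1 20 / cat loop above) are not visible in the trace, and the exact way rpc_cmd consumes it is an assumption.

  # condensed sketch of the connect_stress setup and monitor loop traced here
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte IO unit
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512                  # backing namespace for the stressor

  # run the connector against the listener for 10 seconds...
  "$rootdir/test/nvme/connect_stress/connect_stress" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!

  # ...and while it is alive, keep feeding the prepared RPC batch to the target
  while kill -0 "$PERF_PID" 2> /dev/null; do               # connect_stress.sh@34
      rpc_cmd < "$rpcs"                                    # @35 (redirection assumed)
  done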
00:14:06.297 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.297 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.297 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.554 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.554 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:06.554 00:59:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.554 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.554 00:59:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.810 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.810 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:06.810 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.810 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.810 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.373 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.373 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:07.373 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.373 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.373 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.630 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.630 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:07.630 00:59:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.630 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.630 00:59:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.887 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.887 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:07.887 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.887 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.887 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.145 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.145 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:08.145 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.145 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.145 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.711 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.711 00:59:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:08.711 00:59:57 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.711 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.711 00:59:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.995 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.995 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:08.995 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.995 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.995 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.265 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.265 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:09.265 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.265 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.265 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.522 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.522 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:09.522 00:59:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.522 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.522 00:59:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.780 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.780 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:09.780 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.780 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.780 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.036 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.036 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:10.036 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.036 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.036 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.598 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.598 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:10.598 00:59:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.598 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.598 00:59:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.855 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.855 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:10.855 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.855 
01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.855 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.113 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.113 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:11.113 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.113 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.113 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.371 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.371 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:11.371 01:00:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.371 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.371 01:00:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.629 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.629 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:11.629 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.629 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.629 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.194 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.194 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:12.194 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.194 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.194 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.452 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.452 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:12.452 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.452 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.452 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.708 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.708 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:12.708 01:00:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.708 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.708 01:00:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.965 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.965 01:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:12.965 01:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.965 01:00:02 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.965 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.528 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.528 01:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:13.528 01:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.528 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.528 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.783 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.783 01:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:13.783 01:00:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.783 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.783 01:00:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.039 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.039 01:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:14.039 01:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.039 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.039 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.295 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.295 01:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:14.295 01:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.295 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.295 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.551 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.551 01:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:14.551 01:00:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.551 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.551 01:00:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.115 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.115 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:15.115 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.115 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.115 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.372 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.372 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:15.372 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.372 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
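For readers tracing the repeated entries above: connect_stress.sh is polling the stress tool (pid 1090021 in this run) with kill -0 at line 34 and issuing an RPC at line 35 on every pass. A minimal sketch of that kind of loop, inferred from the line references in the log rather than copied from the script (variable names and the RPC payload are assumptions; only the line-34/35/38 commands are taken from the log):

    PID=1090021                            # pid of the backgrounded stress tool in this run
    while kill -0 "$PID" 2>/dev/null; do   # connect_stress.sh:34 - is the stress tool still alive?
        rpc_cmd                            # connect_stress.sh:35 - keep RPC traffic flowing (arguments not visible in this log)
    done
    wait "$PID"                            # connect_stress.sh:38 - reap it once kill -0 finally reports "No such process"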
00:14:15.372 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.372 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1090021 00:14:15.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1090021) - No such process 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1090021 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:15.629 rmmod nvme_tcp 00:14:15.629 rmmod nvme_fabrics 00:14:15.629 rmmod nvme_keyring 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1089875 ']' 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1089875 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1089875 ']' 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1089875 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1089875 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1089875' 00:14:15.629 killing process with pid 1089875 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1089875 00:14:15.629 01:00:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1089875 00:14:15.890 01:00:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:15.890 01:00:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:15.890 01:00:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:15.890 01:00:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:15.890 01:00:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:15.890 01:00:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.890 01:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.890 01:00:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.417 01:00:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:18.417 00:14:18.417 real 0m15.278s 00:14:18.417 user 0m38.110s 00:14:18.417 sys 0m6.018s 00:14:18.417 01:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:18.417 01:00:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.417 ************************************ 00:14:18.417 END TEST nvmf_connect_stress 00:14:18.417 ************************************ 00:14:18.417 01:00:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:18.417 01:00:07 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:18.417 01:00:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:18.417 01:00:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:18.417 01:00:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:18.417 ************************************ 00:14:18.417 START TEST nvmf_fused_ordering 00:14:18.417 ************************************ 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:18.417 * Looking for test storage... 
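Before the detailed trace of the new stage, it may help to see its overall shape. Assembled from the fused_ordering.sh @NN line references that appear later in this stage's log (not from the script source; every argument shown is the one actually logged), the test boils down to roughly this sequence:

    # target/fused_ordering.sh, reconstructed from the @12..@22 references in this run:
    nvmftestinit                                                     # @12: detect NICs, set up the TCP test network
    nvmfappstart -m 0x2                                              # @13: start nvmf_tgt in the target namespace
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                  # @15: create the TCP transport (options as logged)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # @16
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # @17
    rpc_cmd bdev_null_create NULL1 1000 512                          # @18: 1000 MB null bdev, 512-byte blocks
    rpc_cmd bdev_wait_for_examine                                    # @19
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # @20: expose NULL1 as namespace 1
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'  # @22: initiator side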
00:14:18.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.417 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.418 01:00:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.314 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:20.315 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:20.315 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:20.315 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.315 01:00:09 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:20.315 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:20.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:14:20.315 00:14:20.315 --- 10.0.0.2 ping statistics --- 00:14:20.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.315 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:14:20.315 00:14:20.315 --- 10.0.0.1 ping statistics --- 00:14:20.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.315 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1093788 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1093788 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1093788 ']' 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.315 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.315 [2024-07-14 01:00:09.473158] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
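To make the interleaved nvmf_tcp_init / nvmfappstart output above easier to follow, here is the same bring-up collapsed into the commands actually logged for this host (interface names and addresses as reported; surrounding shell plumbing omitted):

    # One e810 port (cvl_0_0) becomes the target side inside a network namespace,
    # the other (cvl_0_1) stays in the root namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # The target application is then launched inside the namespace (pid 1093788 in this run):
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2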
00:14:20.315 [2024-07-14 01:00:09.473265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.315 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.315 [2024-07-14 01:00:09.536579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.315 [2024-07-14 01:00:09.624646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.315 [2024-07-14 01:00:09.624709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.315 [2024-07-14 01:00:09.624722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.315 [2024-07-14 01:00:09.624732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.315 [2024-07-14 01:00:09.624741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.315 [2024-07-14 01:00:09.624777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 [2024-07-14 01:00:09.770030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 [2024-07-14 01:00:09.786202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.573 01:00:09 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 NULL1 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.573 01:00:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:20.573 [2024-07-14 01:00:09.832149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:20.573 [2024-07-14 01:00:09.832194] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093810 ] 00:14:20.573 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.137 Attached to nqn.2016-06.io.spdk:cnode1 00:14:21.137 Namespace ID: 1 size: 1GB 00:14:21.137 fused_ordering(0) 00:14:21.137 fused_ordering(1) 00:14:21.137 fused_ordering(2) 00:14:21.137 fused_ordering(3) 00:14:21.137 fused_ordering(4) 00:14:21.137 fused_ordering(5) 00:14:21.137 fused_ordering(6) 00:14:21.137 fused_ordering(7) 00:14:21.137 fused_ordering(8) 00:14:21.137 fused_ordering(9) 00:14:21.137 fused_ordering(10) 00:14:21.137 fused_ordering(11) 00:14:21.137 fused_ordering(12) 00:14:21.137 fused_ordering(13) 00:14:21.137 fused_ordering(14) 00:14:21.137 fused_ordering(15) 00:14:21.137 fused_ordering(16) 00:14:21.137 fused_ordering(17) 00:14:21.137 fused_ordering(18) 00:14:21.137 fused_ordering(19) 00:14:21.137 fused_ordering(20) 00:14:21.137 fused_ordering(21) 00:14:21.137 fused_ordering(22) 00:14:21.137 fused_ordering(23) 00:14:21.137 fused_ordering(24) 00:14:21.137 fused_ordering(25) 00:14:21.137 fused_ordering(26) 00:14:21.137 fused_ordering(27) 00:14:21.137 fused_ordering(28) 00:14:21.137 fused_ordering(29) 00:14:21.137 fused_ordering(30) 00:14:21.137 fused_ordering(31) 00:14:21.137 fused_ordering(32) 00:14:21.137 fused_ordering(33) 00:14:21.137 fused_ordering(34) 00:14:21.137 fused_ordering(35) 00:14:21.137 fused_ordering(36) 00:14:21.137 fused_ordering(37) 00:14:21.137 fused_ordering(38) 00:14:21.137 fused_ordering(39) 00:14:21.137 fused_ordering(40) 00:14:21.137 fused_ordering(41) 00:14:21.137 fused_ordering(42) 00:14:21.137 fused_ordering(43) 00:14:21.137 
fused_ordering(44) 00:14:21.137 fused_ordering(45) 00:14:21.137 fused_ordering(46) 00:14:21.138 fused_ordering(47) 00:14:21.138 fused_ordering(48) 00:14:21.138 fused_ordering(49) 00:14:21.138 fused_ordering(50) 00:14:21.138 fused_ordering(51) 00:14:21.138 fused_ordering(52) 00:14:21.138 fused_ordering(53) 00:14:21.138 fused_ordering(54) 00:14:21.138 fused_ordering(55) 00:14:21.138 fused_ordering(56) 00:14:21.138 fused_ordering(57) 00:14:21.138 fused_ordering(58) 00:14:21.138 fused_ordering(59) 00:14:21.138 fused_ordering(60) 00:14:21.138 fused_ordering(61) 00:14:21.138 fused_ordering(62) 00:14:21.138 fused_ordering(63) 00:14:21.138 fused_ordering(64) 00:14:21.138 fused_ordering(65) 00:14:21.138 fused_ordering(66) 00:14:21.138 fused_ordering(67) 00:14:21.138 fused_ordering(68) 00:14:21.138 fused_ordering(69) 00:14:21.138 fused_ordering(70) 00:14:21.138 fused_ordering(71) 00:14:21.138 fused_ordering(72) 00:14:21.138 fused_ordering(73) 00:14:21.138 fused_ordering(74) 00:14:21.138 fused_ordering(75) 00:14:21.138 fused_ordering(76) 00:14:21.138 fused_ordering(77) 00:14:21.138 fused_ordering(78) 00:14:21.138 fused_ordering(79) 00:14:21.138 fused_ordering(80) 00:14:21.138 fused_ordering(81) 00:14:21.138 fused_ordering(82) 00:14:21.138 fused_ordering(83) 00:14:21.138 fused_ordering(84) 00:14:21.138 fused_ordering(85) 00:14:21.138 fused_ordering(86) 00:14:21.138 fused_ordering(87) 00:14:21.138 fused_ordering(88) 00:14:21.138 fused_ordering(89) 00:14:21.138 fused_ordering(90) 00:14:21.138 fused_ordering(91) 00:14:21.138 fused_ordering(92) 00:14:21.138 fused_ordering(93) 00:14:21.138 fused_ordering(94) 00:14:21.138 fused_ordering(95) 00:14:21.138 fused_ordering(96) 00:14:21.138 fused_ordering(97) 00:14:21.138 fused_ordering(98) 00:14:21.138 fused_ordering(99) 00:14:21.138 fused_ordering(100) 00:14:21.138 fused_ordering(101) 00:14:21.138 fused_ordering(102) 00:14:21.138 fused_ordering(103) 00:14:21.138 fused_ordering(104) 00:14:21.138 fused_ordering(105) 00:14:21.138 fused_ordering(106) 00:14:21.138 fused_ordering(107) 00:14:21.138 fused_ordering(108) 00:14:21.138 fused_ordering(109) 00:14:21.138 fused_ordering(110) 00:14:21.138 fused_ordering(111) 00:14:21.138 fused_ordering(112) 00:14:21.138 fused_ordering(113) 00:14:21.138 fused_ordering(114) 00:14:21.138 fused_ordering(115) 00:14:21.138 fused_ordering(116) 00:14:21.138 fused_ordering(117) 00:14:21.138 fused_ordering(118) 00:14:21.138 fused_ordering(119) 00:14:21.138 fused_ordering(120) 00:14:21.138 fused_ordering(121) 00:14:21.138 fused_ordering(122) 00:14:21.138 fused_ordering(123) 00:14:21.138 fused_ordering(124) 00:14:21.138 fused_ordering(125) 00:14:21.138 fused_ordering(126) 00:14:21.138 fused_ordering(127) 00:14:21.138 fused_ordering(128) 00:14:21.138 fused_ordering(129) 00:14:21.138 fused_ordering(130) 00:14:21.138 fused_ordering(131) 00:14:21.138 fused_ordering(132) 00:14:21.138 fused_ordering(133) 00:14:21.138 fused_ordering(134) 00:14:21.138 fused_ordering(135) 00:14:21.138 fused_ordering(136) 00:14:21.138 fused_ordering(137) 00:14:21.138 fused_ordering(138) 00:14:21.138 fused_ordering(139) 00:14:21.138 fused_ordering(140) 00:14:21.138 fused_ordering(141) 00:14:21.138 fused_ordering(142) 00:14:21.138 fused_ordering(143) 00:14:21.138 fused_ordering(144) 00:14:21.138 fused_ordering(145) 00:14:21.138 fused_ordering(146) 00:14:21.138 fused_ordering(147) 00:14:21.138 fused_ordering(148) 00:14:21.138 fused_ordering(149) 00:14:21.138 fused_ordering(150) 00:14:21.138 fused_ordering(151) 00:14:21.138 fused_ordering(152) 00:14:21.138 
fused_ordering(153) 00:14:21.138 fused_ordering(154) 00:14:21.138 fused_ordering(155) 00:14:21.138 fused_ordering(156) 00:14:21.138 fused_ordering(157) 00:14:21.138 fused_ordering(158) 00:14:21.138 fused_ordering(159) 00:14:21.138 fused_ordering(160) 00:14:21.138 fused_ordering(161) 00:14:21.138 fused_ordering(162) 00:14:21.138 fused_ordering(163) 00:14:21.138 fused_ordering(164) 00:14:21.138 fused_ordering(165) 00:14:21.138 fused_ordering(166) 00:14:21.138 fused_ordering(167) 00:14:21.138 fused_ordering(168) 00:14:21.138 fused_ordering(169) 00:14:21.138 fused_ordering(170) 00:14:21.138 fused_ordering(171) 00:14:21.138 fused_ordering(172) 00:14:21.138 fused_ordering(173) 00:14:21.138 fused_ordering(174) 00:14:21.138 fused_ordering(175) 00:14:21.138 fused_ordering(176) 00:14:21.138 fused_ordering(177) 00:14:21.138 fused_ordering(178) 00:14:21.138 fused_ordering(179) 00:14:21.138 fused_ordering(180) 00:14:21.138 fused_ordering(181) 00:14:21.138 fused_ordering(182) 00:14:21.138 fused_ordering(183) 00:14:21.138 fused_ordering(184) 00:14:21.138 fused_ordering(185) 00:14:21.138 fused_ordering(186) 00:14:21.138 fused_ordering(187) 00:14:21.138 fused_ordering(188) 00:14:21.138 fused_ordering(189) 00:14:21.138 fused_ordering(190) 00:14:21.138 fused_ordering(191) 00:14:21.138 fused_ordering(192) 00:14:21.138 fused_ordering(193) 00:14:21.138 fused_ordering(194) 00:14:21.138 fused_ordering(195) 00:14:21.138 fused_ordering(196) 00:14:21.138 fused_ordering(197) 00:14:21.138 fused_ordering(198) 00:14:21.138 fused_ordering(199) 00:14:21.138 fused_ordering(200) 00:14:21.138 fused_ordering(201) 00:14:21.138 fused_ordering(202) 00:14:21.138 fused_ordering(203) 00:14:21.138 fused_ordering(204) 00:14:21.138 fused_ordering(205) 00:14:22.073 fused_ordering(206) 00:14:22.073 fused_ordering(207) 00:14:22.073 fused_ordering(208) 00:14:22.073 fused_ordering(209) 00:14:22.073 fused_ordering(210) 00:14:22.073 fused_ordering(211) 00:14:22.073 fused_ordering(212) 00:14:22.073 fused_ordering(213) 00:14:22.073 fused_ordering(214) 00:14:22.073 fused_ordering(215) 00:14:22.073 fused_ordering(216) 00:14:22.073 fused_ordering(217) 00:14:22.073 fused_ordering(218) 00:14:22.073 fused_ordering(219) 00:14:22.073 fused_ordering(220) 00:14:22.073 fused_ordering(221) 00:14:22.073 fused_ordering(222) 00:14:22.073 fused_ordering(223) 00:14:22.073 fused_ordering(224) 00:14:22.073 fused_ordering(225) 00:14:22.073 fused_ordering(226) 00:14:22.073 fused_ordering(227) 00:14:22.073 fused_ordering(228) 00:14:22.073 fused_ordering(229) 00:14:22.073 fused_ordering(230) 00:14:22.073 fused_ordering(231) 00:14:22.073 fused_ordering(232) 00:14:22.073 fused_ordering(233) 00:14:22.073 fused_ordering(234) 00:14:22.073 fused_ordering(235) 00:14:22.073 fused_ordering(236) 00:14:22.073 fused_ordering(237) 00:14:22.073 fused_ordering(238) 00:14:22.073 fused_ordering(239) 00:14:22.073 fused_ordering(240) 00:14:22.073 fused_ordering(241) 00:14:22.073 fused_ordering(242) 00:14:22.073 fused_ordering(243) 00:14:22.073 fused_ordering(244) 00:14:22.073 fused_ordering(245) 00:14:22.073 fused_ordering(246) 00:14:22.073 fused_ordering(247) 00:14:22.073 fused_ordering(248) 00:14:22.073 fused_ordering(249) 00:14:22.073 fused_ordering(250) 00:14:22.073 fused_ordering(251) 00:14:22.073 fused_ordering(252) 00:14:22.073 fused_ordering(253) 00:14:22.073 fused_ordering(254) 00:14:22.073 fused_ordering(255) 00:14:22.073 fused_ordering(256) 00:14:22.073 fused_ordering(257) 00:14:22.073 fused_ordering(258) 00:14:22.073 fused_ordering(259) 00:14:22.073 fused_ordering(260) 
00:14:22.073 fused_ordering(261) 00:14:22.073 fused_ordering(262) 00:14:22.073 fused_ordering(263) 00:14:22.073 fused_ordering(264) 00:14:22.073 fused_ordering(265) 00:14:22.073 fused_ordering(266) 00:14:22.073 fused_ordering(267) 00:14:22.073 fused_ordering(268) 00:14:22.073 fused_ordering(269) 00:14:22.073 fused_ordering(270) 00:14:22.073 fused_ordering(271) 00:14:22.073 fused_ordering(272) 00:14:22.073 fused_ordering(273) 00:14:22.073 fused_ordering(274) 00:14:22.073 fused_ordering(275) 00:14:22.073 fused_ordering(276) 00:14:22.073 fused_ordering(277) 00:14:22.073 fused_ordering(278) 00:14:22.073 fused_ordering(279) 00:14:22.073 fused_ordering(280) 00:14:22.073 fused_ordering(281) 00:14:22.073 fused_ordering(282) 00:14:22.073 fused_ordering(283) 00:14:22.073 fused_ordering(284) 00:14:22.073 fused_ordering(285) 00:14:22.073 fused_ordering(286) 00:14:22.073 fused_ordering(287) 00:14:22.073 fused_ordering(288) 00:14:22.073 fused_ordering(289) 00:14:22.073 fused_ordering(290) 00:14:22.073 fused_ordering(291) 00:14:22.073 fused_ordering(292) 00:14:22.073 fused_ordering(293) 00:14:22.073 fused_ordering(294) 00:14:22.073 fused_ordering(295) 00:14:22.073 fused_ordering(296) 00:14:22.073 fused_ordering(297) 00:14:22.073 fused_ordering(298) 00:14:22.073 fused_ordering(299) 00:14:22.073 fused_ordering(300) 00:14:22.073 fused_ordering(301) 00:14:22.073 fused_ordering(302) 00:14:22.073 fused_ordering(303) 00:14:22.073 fused_ordering(304) 00:14:22.073 fused_ordering(305) 00:14:22.073 fused_ordering(306) 00:14:22.073 fused_ordering(307) 00:14:22.073 fused_ordering(308) 00:14:22.073 fused_ordering(309) 00:14:22.073 fused_ordering(310) 00:14:22.073 fused_ordering(311) 00:14:22.073 fused_ordering(312) 00:14:22.073 fused_ordering(313) 00:14:22.073 fused_ordering(314) 00:14:22.073 fused_ordering(315) 00:14:22.073 fused_ordering(316) 00:14:22.073 fused_ordering(317) 00:14:22.073 fused_ordering(318) 00:14:22.073 fused_ordering(319) 00:14:22.073 fused_ordering(320) 00:14:22.073 fused_ordering(321) 00:14:22.073 fused_ordering(322) 00:14:22.073 fused_ordering(323) 00:14:22.073 fused_ordering(324) 00:14:22.073 fused_ordering(325) 00:14:22.073 fused_ordering(326) 00:14:22.073 fused_ordering(327) 00:14:22.073 fused_ordering(328) 00:14:22.073 fused_ordering(329) 00:14:22.073 fused_ordering(330) 00:14:22.073 fused_ordering(331) 00:14:22.073 fused_ordering(332) 00:14:22.073 fused_ordering(333) 00:14:22.073 fused_ordering(334) 00:14:22.073 fused_ordering(335) 00:14:22.073 fused_ordering(336) 00:14:22.073 fused_ordering(337) 00:14:22.073 fused_ordering(338) 00:14:22.073 fused_ordering(339) 00:14:22.073 fused_ordering(340) 00:14:22.073 fused_ordering(341) 00:14:22.073 fused_ordering(342) 00:14:22.073 fused_ordering(343) 00:14:22.073 fused_ordering(344) 00:14:22.073 fused_ordering(345) 00:14:22.073 fused_ordering(346) 00:14:22.073 fused_ordering(347) 00:14:22.073 fused_ordering(348) 00:14:22.073 fused_ordering(349) 00:14:22.073 fused_ordering(350) 00:14:22.073 fused_ordering(351) 00:14:22.073 fused_ordering(352) 00:14:22.073 fused_ordering(353) 00:14:22.073 fused_ordering(354) 00:14:22.073 fused_ordering(355) 00:14:22.073 fused_ordering(356) 00:14:22.073 fused_ordering(357) 00:14:22.073 fused_ordering(358) 00:14:22.073 fused_ordering(359) 00:14:22.073 fused_ordering(360) 00:14:22.073 fused_ordering(361) 00:14:22.073 fused_ordering(362) 00:14:22.073 fused_ordering(363) 00:14:22.073 fused_ordering(364) 00:14:22.073 fused_ordering(365) 00:14:22.073 fused_ordering(366) 00:14:22.073 fused_ordering(367) 00:14:22.073 
[fused_ordering output condensed for readability: entries 368-409 completed at 00:14:22.073, 410-540 at 00:14:22.649, 541-614 at 00:14:22.650, 615-819 at 00:14:23.249, 820-987 at 00:14:24.182, and 988-1012 at 00:14:24.183; the remaining entries follow]
00:14:24.183 fused_ordering(1013) 00:14:24.183 fused_ordering(1014) 00:14:24.183 fused_ordering(1015) 00:14:24.183 fused_ordering(1016) 00:14:24.183 fused_ordering(1017) 00:14:24.183 fused_ordering(1018) 00:14:24.183 fused_ordering(1019) 00:14:24.183 fused_ordering(1020) 00:14:24.183 fused_ordering(1021) 00:14:24.183 fused_ordering(1022) 00:14:24.183 fused_ordering(1023) 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.183 rmmod nvme_tcp 00:14:24.183 rmmod nvme_fabrics 00:14:24.183 rmmod nvme_keyring 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1093788 ']' 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1093788 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1093788 ']' 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1093788 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1093788 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1093788' 00:14:24.183 killing process with pid 1093788 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1093788 00:14:24.183 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1093788 00:14:24.441 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.441 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.441 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.441 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.441 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.441 01:00:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.441 01:00:13 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.441 01:00:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.990 01:00:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:26.990 00:14:26.990 real 0m8.477s 00:14:26.990 user 0m6.014s 00:14:26.990 sys 0m4.275s 00:14:26.990 01:00:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.990 01:00:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:26.990 ************************************ 00:14:26.990 END TEST nvmf_fused_ordering 00:14:26.990 ************************************ 00:14:26.990 01:00:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:26.990 01:00:15 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:26.990 01:00:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:26.990 01:00:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.990 01:00:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.990 ************************************ 00:14:26.990 START TEST nvmf_delete_subsystem 00:14:26.990 ************************************ 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:26.990 * Looking for test storage... 00:14:26.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.990 01:00:15 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.990 01:00:15 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.990 01:00:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.891 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.891 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.891 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.891 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.892 01:00:17 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:28.892 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:28.892 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.892 01:00:17 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:28.892 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:28.892 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:28.892 01:00:17 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.892 01:00:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:28.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:14:28.892 00:14:28.892 --- 10.0.0.2 ping statistics --- 00:14:28.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.892 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:28.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:14:28.892 00:14:28.892 --- 10.0.0.1 ping statistics --- 00:14:28.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.892 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1096152 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1096152 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1096152 ']' 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.892 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:28.892 [2024-07-14 01:00:18.085027] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:28.892 [2024-07-14 01:00:18.085108] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.892 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.892 [2024-07-14 01:00:18.157738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:28.892 [2024-07-14 01:00:18.248314] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:28.892 [2024-07-14 01:00:18.248377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.892 [2024-07-14 01:00:18.248394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.892 [2024-07-14 01:00:18.248408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.892 [2024-07-14 01:00:18.248420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.892 [2024-07-14 01:00:18.248513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.893 [2024-07-14 01:00:18.248518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 [2024-07-14 01:00:18.390255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 [2024-07-14 01:00:18.406471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 NULL1 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 Delay0 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1096287 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:29.151 01:00:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:29.151 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.151 [2024-07-14 01:00:18.481284] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
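For reference, the rpc_cmd and perf invocations traced above amount to the sequence sketched below. This is a condensed reconstruction from the trace, not the verbatim delete_subsystem.sh: the rpc.py path and the assumption that rpc_cmd forwards to it against the target started earlier in the cvl_0_0_ns_spdk namespace are illustrative, while the method names and parameters are taken directly from the logged commands.

#!/usr/bin/env bash
# Sketch of the delete-subsystem flow seen in this trace (assumed rpc.py wrapper and paths).
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
perf="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf"

# Target-side setup, matching the rpc_cmd calls logged at 01:00:18.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512                      # name, size, block size as logged
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Initiator-side load, then delete the subsystem while I/O is still queued on Delay0.
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait $perf_pid || true    # outstanding commands complete with errors, as shown below

The large delay-bdev latencies keep spdk_nvme_perf's queue depth outstanding when nvmf_delete_subsystem runs, which is why the completions that follow report errors (sct=0, sc=8) and the TCP qpair state-change messages are logged.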
00:14:31.050 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:31.050 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.050 01:00:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 [2024-07-14 01:00:20.612515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2449970 is same with the state(5) to be set 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed 
with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error 
(sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Write completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 Read completed with error (sct=0, sc=8) 00:14:31.307 starting I/O failed: -6 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 starting I/O failed: -6 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 starting I/O failed: -6 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 starting I/O failed: -6 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 starting I/O failed: -6 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 [2024-07-14 01:00:20.613433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb93c000c00 is same with the state(5) to be set 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read 
completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:31.308 Write completed with error (sct=0, sc=8) 00:14:31.308 Read completed with error (sct=0, sc=8) 00:14:32.241 [2024-07-14 01:00:21.584856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2457a30 is same with the state(5) to be set 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, 
sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 [2024-07-14 01:00:21.614567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2449e30 is same with the state(5) to be set 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 [2024-07-14 01:00:21.614759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244a450 is same with the state(5) to be set 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read 
completed with error (sct=0, sc=8) 00:14:32.241 [2024-07-14 01:00:21.615193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb93c00cfe0 is same with the state(5) to be set 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Write completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 Read completed with error (sct=0, sc=8) 00:14:32.241 [2024-07-14 01:00:21.615896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb93c00d600 is same with the state(5) to be set 00:14:32.241 Initializing NVMe Controllers 00:14:32.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:32.241 Controller IO queue size 128, less than required. 00:14:32.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:32.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:32.241 Initialization complete. Launching workers. 
00:14:32.241 ========================================================
00:14:32.241 Latency(us)
00:14:32.241 Device Information : IOPS MiB/s Average min max
00:14:32.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.21 0.09 885740.00 733.16 1012076.42
00:14:32.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.69 0.09 881962.70 429.13 1013888.01
00:14:32.241 ========================================================
00:14:32.241 Total : 350.90 0.17 883837.99 429.13 1013888.01
00:14:32.241
00:14:32.241 [2024-07-14 01:00:21.616444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2457a30 (9): Bad file descriptor
00:14:32.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:32.241 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:32.241 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:14:32.241 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1096287
00:14:32.241 01:00:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1096287
00:14:32.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1096287) - No such process
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1096287
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1096287
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1096287
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:32.807 [2024-07-14 01:00:22.140204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1096695 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1096695 00:14:32.807 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:32.807 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.807 [2024-07-14 01:00:22.203646] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
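For readability, the delete_subsystem sequence traced above reduces to roughly the following shell steps. This is an illustrative sketch of this particular run (rpc_cmd stands in for scripts/rpc.py as in the traced script, PIDs and flag values are copied from the trace, and the loop handling is approximated); the subsystem deletion that makes spdk_nvme_perf exit with "errors occurred" is implied by the test name rather than shown verbatim in this excerpt.

  # Re-create the subsystem (flags exactly as traced), then add a TCP listener and the Delay0 namespace.
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Drive 70/30 random read/write I/O at queue depth 128 for 3 seconds in the background.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Poll until the perf process disappears; kill -0 only tests that the PID still exists.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break   # the traced script gives up after roughly 20 half-second polls
      sleep 0.5
  done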
00:14:33.372 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:33.372 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1096695 00:14:33.372 01:00:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:33.937 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:33.937 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1096695 00:14:33.937 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:34.502 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:34.502 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1096695 00:14:34.502 01:00:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:34.760 01:00:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:34.760 01:00:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1096695 00:14:34.760 01:00:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.326 01:00:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:35.326 01:00:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1096695 00:14:35.326 01:00:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:35.892 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:35.892 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1096695 00:14:35.892 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:36.150 Initializing NVMe Controllers 00:14:36.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:36.150 Controller IO queue size 128, less than required. 00:14:36.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:36.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:36.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:36.150 Initialization complete. Launching workers. 
00:14:36.150 ========================================================
00:14:36.150 Latency(us)
00:14:36.150 Device Information : IOPS MiB/s Average min max
00:14:36.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004632.46 1000229.21 1013867.31
00:14:36.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004872.85 1000269.78 1042359.71
00:14:36.150 ========================================================
00:14:36.150 Total : 256.00 0.12 1004752.65 1000229.21 1042359.71
00:14:36.150
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1096695
00:14:36.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1096695) - No such process
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1096695
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:36.410 rmmod nvme_tcp
00:14:36.410 rmmod nvme_fabrics
00:14:36.410 rmmod nvme_keyring
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1096152 ']'
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1096152
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1096152 ']'
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1096152
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1096152
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1096152'
00:14:36.410 killing process with pid 1096152
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1096152
00:14:36.410 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait
1096152 00:14:36.669 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:36.669 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:36.669 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:36.669 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.669 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.669 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.669 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.669 01:00:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.203 01:00:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:39.203 00:14:39.203 real 0m12.215s 00:14:39.203 user 0m27.694s 00:14:39.203 sys 0m2.975s 00:14:39.203 01:00:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.203 01:00:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:39.203 ************************************ 00:14:39.203 END TEST nvmf_delete_subsystem 00:14:39.203 ************************************ 00:14:39.203 01:00:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:39.203 01:00:28 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:39.203 01:00:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:39.203 01:00:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.203 01:00:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:39.203 ************************************ 00:14:39.203 START TEST nvmf_ns_masking 00:14:39.203 ************************************ 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:39.203 * Looking for test storage... 
00:14:39.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.203 01:00:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e0702fae-fabc-4e51-8413-c4d58021612f 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a7b1bde6-e450-49fc-8498-039ba1edde40 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c7f41a38-47f6-4019-9a16-32477b304e03 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:39.204 01:00:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:41.157 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:41.157 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.157 
01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:41.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.157 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:41.158 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:14:41.158 00:14:41.158 --- 10.0.0.2 ping statistics --- 00:14:41.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.158 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:14:41.158 00:14:41.158 --- 10.0.0.1 ping statistics --- 00:14:41.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.158 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1099038 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1099038 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1099038 ']' 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.158 01:00:30 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:41.158 [2024-07-14 01:00:30.281564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:41.158 [2024-07-14 01:00:30.281648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.158 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.158 [2024-07-14 01:00:30.346186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.158 [2024-07-14 01:00:30.436674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.158 [2024-07-14 01:00:30.436731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.158 [2024-07-14 01:00:30.436745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.158 [2024-07-14 01:00:30.436756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.158 [2024-07-14 01:00:30.436765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.158 [2024-07-14 01:00:30.436792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.158 01:00:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:41.416 01:00:30 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.416 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:41.416 [2024-07-14 01:00:30.801398] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.416 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:41.416 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:41.416 01:00:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:41.675 Malloc1 00:14:41.932 01:00:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:42.190 Malloc2 00:14:42.190 01:00:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
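Condensed for readability, the nvmf_tgt bring-up traced above (and continued just below) is a handful of rpc.py calls; the long workspace prefix is shortened to rpc.py here, so treat this as a sketch of what the harness runs rather than a copy of ns_masking.sh:

  # Create the TCP transport (flags exactly as traced above).
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  # Two 64 MiB malloc bdevs with 512-byte blocks, used as namespaces below.
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  # The subsystem that both namespaces will be attached to.
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME

The namespaces and the 10.0.0.2:4420 TCP listener are attached in the trace that follows.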
00:14:42.447 01:00:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:42.706 01:00:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.964 [2024-07-14 01:00:32.171634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.964 01:00:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:42.964 01:00:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7f41a38-47f6-4019-9a16-32477b304e03 -a 10.0.0.2 -s 4420 -i 4 00:14:42.964 01:00:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.964 01:00:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:42.964 01:00:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.964 01:00:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:42.964 01:00:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.488 [ 0]:0x1 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=38545283571641e1a2ce14dd8cbb43ac 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 38545283571641e1a2ce14dd8cbb43ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
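The ns_is_visible helper exercised above is an nvme-cli/jq pipeline; sketched here for readability, with /dev/nvme0 being the controller name this run obtained from nvme list-subsys, and namespace 0x1 as the example:

  # List active namespace IDs on the connected controller and look for 0x1.
  nvme list-ns /dev/nvme0 | grep 0x1
  # Read the namespace's NGUID; in this test a namespace that is masked away
  # (not visible to this host NQN) comes back with an all-zero NGUID.
  nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
  [[ $nguid != 00000000000000000000000000000000 ]] && echo 'namespace 0x1 is visible'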
00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:45.488 [ 0]:0x1 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=38545283571641e1a2ce14dd8cbb43ac 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 38545283571641e1a2ce14dd8cbb43ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:45.488 [ 1]:0x2 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b7e762eaa76648558d37d812efbd055d 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b7e762eaa76648558d37d812efbd055d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:45.488 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.746 01:00:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.746 01:00:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:46.004 01:00:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:46.004 01:00:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7f41a38-47f6-4019-9a16-32477b304e03 -a 10.0.0.2 -s 4420 -i 4 00:14:46.262 01:00:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:46.262 01:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:46.262 01:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.262 01:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:46.262 01:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:46.262 01:00:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:48.792 01:00:37 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.792 [ 0]:0x2 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b7e762eaa76648558d37d812efbd055d 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
b7e762eaa76648558d37d812efbd055d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:48.792 [ 0]:0x1 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:48.792 01:00:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=38545283571641e1a2ce14dd8cbb43ac 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 38545283571641e1a2ce14dd8cbb43ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:48.792 [ 1]:0x2 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b7e762eaa76648558d37d812efbd055d 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b7e762eaa76648558d37d812efbd055d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:48.792 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:49.051 [ 0]:0x2 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b7e762eaa76648558d37d812efbd055d 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b7e762eaa76648558d37d812efbd055d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:49.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.051 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:49.309 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:49.309 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c7f41a38-47f6-4019-9a16-32477b304e03 -a 10.0.0.2 -s 4420 -i 4 00:14:49.567 01:00:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:49.567 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:49.567 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.567 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:49.567 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:49.567 01:00:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:51.464 01:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:51.464 01:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:51.464 01:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.464 01:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:51.464 01:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.464 01:00:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
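The trace above exercises the full masking flow for a namespace attached with --no-auto-visible: the namespace stays hidden from a connected host until it is explicitly exposed with nvmf_ns_add_host, and it disappears again after nvmf_ns_remove_host. As a minimal sketch of that sequence (following only the commands visible in this trace, and assuming a target already listening on 10.0.0.2:4420 with subsystem nqn.2016-06.io.spdk:cnode1 and a Malloc1 bdev; the rpc.py path is this workspace's copy):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Attach the bdev as namespace 1, hidden from all hosts by default.
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    # Connect as host1; "nvme list-ns" does not show the namespace yet.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.2 -s 4420

    # Expose namespace 1 to host1 only; it now appears in "nvme list-ns".
    $RPC nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # Hide it again, then drop the connection.
    $RPC nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

As the next part of the trace shows, attempting nvmf_ns_remove_host on a namespace that was added without --no-auto-visible is rejected by the target with a JSON-RPC "Invalid parameters" error (-32602), which is exactly what the NOT-wrapped rpc.py call below verifies.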
00:14:51.464 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:51.464 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.722 [ 0]:0x1 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=38545283571641e1a2ce14dd8cbb43ac 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 38545283571641e1a2ce14dd8cbb43ac != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.722 [ 1]:0x2 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b7e762eaa76648558d37d812efbd055d 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b7e762eaa76648558d37d812efbd055d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.722 01:00:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:51.980 [ 0]:0x2 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b7e762eaa76648558d37d812efbd055d 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b7e762eaa76648558d37d812efbd055d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:51.980 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:52.238 [2024-07-14 01:00:41.556261] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:52.238 request: 00:14:52.238 { 00:14:52.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.238 "nsid": 2, 00:14:52.238 "host": "nqn.2016-06.io.spdk:host1", 00:14:52.238 "method": "nvmf_ns_remove_host", 00:14:52.238 "req_id": 1 00:14:52.238 } 00:14:52.238 Got JSON-RPC error response 00:14:52.238 response: 00:14:52.238 { 00:14:52.238 "code": -32602, 00:14:52.238 "message": "Invalid parameters" 00:14:52.238 } 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:52.238 [ 0]:0x2 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:52.238 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b7e762eaa76648558d37d812efbd055d 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
b7e762eaa76648558d37d812efbd055d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:52.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1100523 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1100523 /var/tmp/host.sock 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1100523 ']' 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:52.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:52.497 01:00:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:52.497 [2024-07-14 01:00:41.770828] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:14:52.497 [2024-07-14 01:00:41.770949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100523 ] 00:14:52.497 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.497 [2024-07-14 01:00:41.834784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.755 [2024-07-14 01:00:41.926390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.013 01:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.013 01:00:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:53.013 01:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.271 01:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:53.529 01:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e0702fae-fabc-4e51-8413-c4d58021612f 00:14:53.529 01:00:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:53.529 01:00:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E0702FAEFABC4E518413C4D58021612F -i 00:14:53.786 01:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a7b1bde6-e450-49fc-8498-039ba1edde40 00:14:53.786 01:00:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:53.786 01:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A7B1BDE6E45049FC8498039BA1EDDE40 -i 00:14:54.043 01:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:54.299 01:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:54.556 01:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:54.556 01:00:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:54.814 nvme0n1 00:14:54.814 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:54.814 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:14:55.423 nvme1n2 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:55.423 01:00:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:55.680 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e0702fae-fabc-4e51-8413-c4d58021612f == \e\0\7\0\2\f\a\e\-\f\a\b\c\-\4\e\5\1\-\8\4\1\3\-\c\4\d\5\8\0\2\1\6\1\2\f ]] 00:14:55.680 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:55.680 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:55.680 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a7b1bde6-e450-49fc-8498-039ba1edde40 == \a\7\b\1\b\d\e\6\-\e\4\5\0\-\4\9\f\c\-\8\4\9\8\-\0\3\9\b\a\1\e\d\d\e\4\0 ]] 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1100523 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1100523 ']' 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1100523 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1100523 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1100523' 00:14:55.937 killing process with pid 1100523 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1100523 00:14:55.937 01:00:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1100523 00:14:56.502 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.760 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:56.760 01:00:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:56.760 01:00:45 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.760 01:00:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:56.760 01:00:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.760 01:00:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:56.760 01:00:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.760 01:00:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.760 rmmod nvme_tcp 00:14:56.760 rmmod nvme_fabrics 00:14:56.760 rmmod nvme_keyring 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1099038 ']' 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1099038 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1099038 ']' 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1099038 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1099038 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1099038' 00:14:56.760 killing process with pid 1099038 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1099038 00:14:56.760 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1099038 00:14:57.018 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.018 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.018 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.018 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.018 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.018 01:00:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.018 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.018 01:00:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.553 01:00:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.553 00:14:59.553 real 0m20.296s 00:14:59.553 user 0m26.524s 00:14:59.553 sys 0m3.953s 00:14:59.553 01:00:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.553 01:00:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:59.553 ************************************ 00:14:59.553 END TEST nvmf_ns_masking 00:14:59.553 ************************************ 00:14:59.553 01:00:48 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:14:59.553 01:00:48 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:59.553 01:00:48 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:59.553 01:00:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.553 01:00:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.553 01:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.553 ************************************ 00:14:59.553 START TEST nvmf_nvme_cli 00:14:59.553 ************************************ 00:14:59.553 01:00:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:59.553 * Looking for test storage... 00:14:59.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.553 01:00:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.553 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:59.553 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.553 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.553 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.553 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.554 01:00:48 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.455 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:01.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:01.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:01.456 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:01.456 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.456 01:00:50 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:01.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:15:01.456 00:15:01.456 --- 10.0.0.2 ping statistics --- 00:15:01.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.456 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:15:01.456 00:15:01.456 --- 10.0.0.1 ping statistics --- 00:15:01.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.456 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1103015 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1103015 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1103015 ']' 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.456 01:00:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.456 [2024-07-14 01:00:50.753581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:15:01.456 [2024-07-14 01:00:50.753666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.456 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.456 [2024-07-14 01:00:50.823811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:01.720 [2024-07-14 01:00:50.920549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.720 [2024-07-14 01:00:50.920611] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.720 [2024-07-14 01:00:50.920627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.720 [2024-07-14 01:00:50.920641] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.720 [2024-07-14 01:00:50.920653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:01.720 [2024-07-14 01:00:50.920710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.720 [2024-07-14 01:00:50.920763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.720 [2024-07-14 01:00:50.921121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:01.720 [2024-07-14 01:00:50.921126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.720 [2024-07-14 01:00:51.079689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.720 Malloc0 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.720 Malloc1 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.720 01:00:51 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:01.720 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.979 [2024-07-14 01:00:51.161270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:01.979 00:15:01.979 Discovery Log Number of Records 2, Generation counter 2 00:15:01.979 =====Discovery Log Entry 0====== 00:15:01.979 trtype: tcp 00:15:01.979 adrfam: ipv4 00:15:01.979 subtype: current discovery subsystem 00:15:01.979 treq: not required 00:15:01.979 portid: 0 00:15:01.979 trsvcid: 4420 00:15:01.979 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:01.979 traddr: 10.0.0.2 00:15:01.979 eflags: explicit discovery connections, duplicate discovery information 00:15:01.979 sectype: none 00:15:01.979 =====Discovery Log Entry 1====== 00:15:01.979 trtype: tcp 00:15:01.979 adrfam: ipv4 00:15:01.979 subtype: nvme subsystem 00:15:01.979 treq: not required 00:15:01.979 portid: 0 00:15:01.979 trsvcid: 4420 00:15:01.979 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:01.979 traddr: 10.0.0.2 00:15:01.979 eflags: none 00:15:01.979 sectype: none 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:01.979 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.546 01:00:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:02.546 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:02.546 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.546 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:02.546 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:02.546 01:00:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.074 01:00:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:05.074 01:00:54 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:05.074 /dev/nvme0n1 ]] 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.074 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:05.075 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:05.075 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.075 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:05.075 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:05.075 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:05.075 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:05.075 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.332 rmmod nvme_tcp 00:15:05.332 rmmod nvme_fabrics 00:15:05.332 rmmod nvme_keyring 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1103015 ']' 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1103015 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1103015 ']' 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1103015 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1103015 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1103015' 00:15:05.332 killing process with pid 1103015 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1103015 00:15:05.332 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1103015 00:15:05.590 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.590 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.590 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.590 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.590 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.590 01:00:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.590 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.590 01:00:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.126 01:00:56 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:08.126 00:15:08.126 real 0m8.547s 00:15:08.126 user 0m16.416s 00:15:08.126 sys 0m2.208s 00:15:08.126 01:00:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.126 01:00:56 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:08.126 ************************************ 00:15:08.126 END TEST nvmf_nvme_cli 00:15:08.126 ************************************ 00:15:08.126 01:00:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:08.126 01:00:57 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:08.126 01:00:57 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:08.126 01:00:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:08.126 01:00:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.126 01:00:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:08.126 ************************************ 00:15:08.126 START TEST nvmf_vfio_user 00:15:08.126 ************************************ 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:08.126 * Looking for test storage... 00:15:08.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:08.126 
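The nvmf_nvme_cli pass that finished above boils down to a short target/host sequence: publish a TCP subsystem over the SPDK RPC socket, then drive it from the initiator with nvme-cli. A minimal sketch using the values from this run (10.0.0.2:4420, serial SPDKISFASTANDAWESOME, two malloc namespaces); rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, and the --hostnqn/--hostid arguments produced by nvme gen-hostnqn are omitted here for brevity:

  # target side: subsystem, namespaces, data and discovery listeners
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # host side: discover, connect, check that both namespaces show up, then disconnect
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # the test waits for this count to reach 2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1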
01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1103947 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1103947' 00:15:08.126 Process pid: 1103947 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1103947 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1103947 ']' 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.126 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.127 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.127 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.127 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:08.127 [2024-07-14 01:00:57.143824] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:08.127 [2024-07-14 01:00:57.143923] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.127 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.127 [2024-07-14 01:00:57.206784] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:08.127 [2024-07-14 01:00:57.302272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.127 [2024-07-14 01:00:57.302343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.127 [2024-07-14 01:00:57.302359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.127 [2024-07-14 01:00:57.302374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.127 [2024-07-14 01:00:57.302386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
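The setup_nvmf_vfio_user steps that the trace performs next follow the same pattern with the VFIOUSER transport: the listener address becomes a per-controller directory under /var/run/vfio-user rather than an IP and port, and clients attach through that socket directory. A sketch of the sequence as run here (64 MB malloc bdevs with 512-byte blocks; the core mask and paths are specific to this job):

  # start the target and wait for its RPC socket (/var/tmp/spdk.sock)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &

  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # the trace repeats the bdev/subsystem/listener steps for Malloc2, cnode2 and vfio-user2/2

  # a client such as spdk_nvme_identify or spdk_nvme_perf then attaches with:
  #   -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'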
00:15:08.127 [2024-07-14 01:00:57.302445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.127 [2024-07-14 01:00:57.302499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.127 [2024-07-14 01:00:57.302552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:08.127 [2024-07-14 01:00:57.302555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.127 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.127 01:00:57 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:08.127 01:00:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:09.061 01:00:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:09.319 01:00:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:09.319 01:00:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:09.319 01:00:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:09.319 01:00:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:09.319 01:00:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:09.577 Malloc1 00:15:09.577 01:00:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:09.835 01:00:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:10.093 01:00:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:10.351 01:00:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:10.351 01:00:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:10.351 01:00:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:10.609 Malloc2 00:15:10.609 01:00:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:10.867 01:01:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:11.124 01:01:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:11.382 01:01:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:11.382 01:01:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:11.382 01:01:00 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.382 01:01:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.382 01:01:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.382 01:01:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:11.382 [2024-07-14 01:01:00.742340] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:11.382 [2024-07-14 01:01:00.742381] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1104362 ] 00:15:11.382 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.382 [2024-07-14 01:01:00.774231] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:11.382 [2024-07-14 01:01:00.783313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.382 [2024-07-14 01:01:00.783341] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc3c5e5b000 00:15:11.382 [2024-07-14 01:01:00.784308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.382 [2024-07-14 01:01:00.785300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.382 [2024-07-14 01:01:00.786304] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.382 [2024-07-14 01:01:00.787312] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.382 [2024-07-14 01:01:00.788317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.382 [2024-07-14 01:01:00.789317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.382 [2024-07-14 01:01:00.790329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:11.382 [2024-07-14 01:01:00.791330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:11.382 [2024-07-14 01:01:00.792339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:11.382 [2024-07-14 01:01:00.792363] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc3c4c0f000 00:15:11.382 [2024-07-14 01:01:00.793533] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.644 [2024-07-14 01:01:00.809337] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:11.644 [2024-07-14 01:01:00.809381] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:11.644 [2024-07-14 01:01:00.811468] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:11.644 [2024-07-14 01:01:00.811525] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:11.644 [2024-07-14 01:01:00.811615] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:11.644 [2024-07-14 01:01:00.811649] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:11.644 [2024-07-14 01:01:00.811660] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:11.644 [2024-07-14 01:01:00.812877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:11.644 [2024-07-14 01:01:00.812899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:11.644 [2024-07-14 01:01:00.812912] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:11.644 [2024-07-14 01:01:00.813453] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:11.644 [2024-07-14 01:01:00.813471] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:11.644 [2024-07-14 01:01:00.813485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:11.644 [2024-07-14 01:01:00.814456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:11.644 [2024-07-14 01:01:00.814475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:11.644 [2024-07-14 01:01:00.815459] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:11.644 [2024-07-14 01:01:00.815478] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:11.644 [2024-07-14 01:01:00.815486] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:11.644 [2024-07-14 01:01:00.815497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:11.644 [2024-07-14 01:01:00.815608] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:11.644 [2024-07-14 01:01:00.815615] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:11.644 [2024-07-14 01:01:00.815624] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:11.644 [2024-07-14 01:01:00.816468] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:11.644 [2024-07-14 01:01:00.817468] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:11.644 [2024-07-14 01:01:00.819879] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:11.644 [2024-07-14 01:01:00.820488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.644 [2024-07-14 01:01:00.820624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:11.644 [2024-07-14 01:01:00.821493] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:11.644 [2024-07-14 01:01:00.821512] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:11.644 [2024-07-14 01:01:00.821521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:11.644 [2024-07-14 01:01:00.821545] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:11.644 [2024-07-14 01:01:00.821562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:11.644 [2024-07-14 01:01:00.821592] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.644 [2024-07-14 01:01:00.821602] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.644 [2024-07-14 01:01:00.821626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.644 [2024-07-14 01:01:00.821691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:11.644 [2024-07-14 01:01:00.821710] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:11.644 [2024-07-14 01:01:00.821722] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:11.644 [2024-07-14 01:01:00.821729] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:11.645 [2024-07-14 01:01:00.821737] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:11.645 [2024-07-14 01:01:00.821744] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:11.645 [2024-07-14 01:01:00.821753] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:11.645 [2024-07-14 01:01:00.821760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.821773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.821788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.821804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.821828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.645 [2024-07-14 01:01:00.821841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.645 [2024-07-14 01:01:00.821875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.645 [2024-07-14 01:01:00.821889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:11.645 [2024-07-14 01:01:00.821898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.821915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.821930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.821946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.821958] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:11.645 [2024-07-14 01:01:00.821967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.821978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.821989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822002] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.822081] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822110] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:11.645 [2024-07-14 01:01:00.822119] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:11.645 [2024-07-14 01:01:00.822129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.822177] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:11.645 [2024-07-14 01:01:00.822194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822220] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.645 [2024-07-14 01:01:00.822228] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.645 [2024-07-14 01:01:00.822237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.822287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822312] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:11.645 [2024-07-14 01:01:00.822320] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.645 [2024-07-14 01:01:00.822329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.822358] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:15:11.645 [2024-07-14 01:01:00.822383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822419] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:11.645 [2024-07-14 01:01:00.822426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:11.645 [2024-07-14 01:01:00.822435] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:11.645 [2024-07-14 01:01:00.822461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.822497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.822523] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.822552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:11.645 [2024-07-14 01:01:00.822584] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:11.645 [2024-07-14 01:01:00.822594] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:11.645 [2024-07-14 01:01:00.822599] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:11.645 [2024-07-14 01:01:00.822605] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:11.645 [2024-07-14 01:01:00.822614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:11.645 [2024-07-14 01:01:00.822625] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:11.645 
[2024-07-14 01:01:00.822633] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:11.645 [2024-07-14 01:01:00.822642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:11.645 [2024-07-14 01:01:00.822655] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:11.646 [2024-07-14 01:01:00.822663] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:11.646 [2024-07-14 01:01:00.822672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:11.646 [2024-07-14 01:01:00.822683] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:11.646 [2024-07-14 01:01:00.822691] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:11.646 [2024-07-14 01:01:00.822700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:11.646 [2024-07-14 01:01:00.822710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:11.646 [2024-07-14 01:01:00.822729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:11.646 [2024-07-14 01:01:00.822746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:11.646 [2024-07-14 01:01:00.822758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:11.646 ===================================================== 00:15:11.646 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.646 ===================================================== 00:15:11.646 Controller Capabilities/Features 00:15:11.646 ================================ 00:15:11.646 Vendor ID: 4e58 00:15:11.646 Subsystem Vendor ID: 4e58 00:15:11.646 Serial Number: SPDK1 00:15:11.646 Model Number: SPDK bdev Controller 00:15:11.646 Firmware Version: 24.09 00:15:11.646 Recommended Arb Burst: 6 00:15:11.646 IEEE OUI Identifier: 8d 6b 50 00:15:11.646 Multi-path I/O 00:15:11.646 May have multiple subsystem ports: Yes 00:15:11.646 May have multiple controllers: Yes 00:15:11.646 Associated with SR-IOV VF: No 00:15:11.646 Max Data Transfer Size: 131072 00:15:11.646 Max Number of Namespaces: 32 00:15:11.646 Max Number of I/O Queues: 127 00:15:11.646 NVMe Specification Version (VS): 1.3 00:15:11.646 NVMe Specification Version (Identify): 1.3 00:15:11.646 Maximum Queue Entries: 256 00:15:11.646 Contiguous Queues Required: Yes 00:15:11.646 Arbitration Mechanisms Supported 00:15:11.646 Weighted Round Robin: Not Supported 00:15:11.646 Vendor Specific: Not Supported 00:15:11.646 Reset Timeout: 15000 ms 00:15:11.646 Doorbell Stride: 4 bytes 00:15:11.646 NVM Subsystem Reset: Not Supported 00:15:11.646 Command Sets Supported 00:15:11.646 NVM Command Set: Supported 00:15:11.646 Boot Partition: Not Supported 00:15:11.646 Memory Page Size Minimum: 4096 bytes 00:15:11.646 Memory Page Size Maximum: 4096 bytes 00:15:11.646 Persistent Memory Region: Not Supported 
00:15:11.646 Optional Asynchronous Events Supported 00:15:11.646 Namespace Attribute Notices: Supported 00:15:11.646 Firmware Activation Notices: Not Supported 00:15:11.646 ANA Change Notices: Not Supported 00:15:11.646 PLE Aggregate Log Change Notices: Not Supported 00:15:11.646 LBA Status Info Alert Notices: Not Supported 00:15:11.646 EGE Aggregate Log Change Notices: Not Supported 00:15:11.646 Normal NVM Subsystem Shutdown event: Not Supported 00:15:11.646 Zone Descriptor Change Notices: Not Supported 00:15:11.646 Discovery Log Change Notices: Not Supported 00:15:11.646 Controller Attributes 00:15:11.646 128-bit Host Identifier: Supported 00:15:11.646 Non-Operational Permissive Mode: Not Supported 00:15:11.646 NVM Sets: Not Supported 00:15:11.646 Read Recovery Levels: Not Supported 00:15:11.646 Endurance Groups: Not Supported 00:15:11.646 Predictable Latency Mode: Not Supported 00:15:11.646 Traffic Based Keep ALive: Not Supported 00:15:11.646 Namespace Granularity: Not Supported 00:15:11.646 SQ Associations: Not Supported 00:15:11.646 UUID List: Not Supported 00:15:11.646 Multi-Domain Subsystem: Not Supported 00:15:11.646 Fixed Capacity Management: Not Supported 00:15:11.646 Variable Capacity Management: Not Supported 00:15:11.646 Delete Endurance Group: Not Supported 00:15:11.646 Delete NVM Set: Not Supported 00:15:11.646 Extended LBA Formats Supported: Not Supported 00:15:11.646 Flexible Data Placement Supported: Not Supported 00:15:11.646 00:15:11.646 Controller Memory Buffer Support 00:15:11.646 ================================ 00:15:11.646 Supported: No 00:15:11.646 00:15:11.646 Persistent Memory Region Support 00:15:11.646 ================================ 00:15:11.646 Supported: No 00:15:11.646 00:15:11.646 Admin Command Set Attributes 00:15:11.646 ============================ 00:15:11.646 Security Send/Receive: Not Supported 00:15:11.646 Format NVM: Not Supported 00:15:11.646 Firmware Activate/Download: Not Supported 00:15:11.646 Namespace Management: Not Supported 00:15:11.646 Device Self-Test: Not Supported 00:15:11.646 Directives: Not Supported 00:15:11.646 NVMe-MI: Not Supported 00:15:11.646 Virtualization Management: Not Supported 00:15:11.646 Doorbell Buffer Config: Not Supported 00:15:11.646 Get LBA Status Capability: Not Supported 00:15:11.646 Command & Feature Lockdown Capability: Not Supported 00:15:11.646 Abort Command Limit: 4 00:15:11.646 Async Event Request Limit: 4 00:15:11.646 Number of Firmware Slots: N/A 00:15:11.646 Firmware Slot 1 Read-Only: N/A 00:15:11.646 Firmware Activation Without Reset: N/A 00:15:11.646 Multiple Update Detection Support: N/A 00:15:11.646 Firmware Update Granularity: No Information Provided 00:15:11.646 Per-Namespace SMART Log: No 00:15:11.646 Asymmetric Namespace Access Log Page: Not Supported 00:15:11.646 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:11.646 Command Effects Log Page: Supported 00:15:11.646 Get Log Page Extended Data: Supported 00:15:11.646 Telemetry Log Pages: Not Supported 00:15:11.646 Persistent Event Log Pages: Not Supported 00:15:11.646 Supported Log Pages Log Page: May Support 00:15:11.646 Commands Supported & Effects Log Page: Not Supported 00:15:11.646 Feature Identifiers & Effects Log Page:May Support 00:15:11.646 NVMe-MI Commands & Effects Log Page: May Support 00:15:11.646 Data Area 4 for Telemetry Log: Not Supported 00:15:11.646 Error Log Page Entries Supported: 128 00:15:11.646 Keep Alive: Supported 00:15:11.646 Keep Alive Granularity: 10000 ms 00:15:11.646 00:15:11.646 NVM Command Set Attributes 
00:15:11.646 ========================== 00:15:11.646 Submission Queue Entry Size 00:15:11.646 Max: 64 00:15:11.646 Min: 64 00:15:11.646 Completion Queue Entry Size 00:15:11.646 Max: 16 00:15:11.646 Min: 16 00:15:11.646 Number of Namespaces: 32 00:15:11.646 Compare Command: Supported 00:15:11.646 Write Uncorrectable Command: Not Supported 00:15:11.646 Dataset Management Command: Supported 00:15:11.646 Write Zeroes Command: Supported 00:15:11.646 Set Features Save Field: Not Supported 00:15:11.646 Reservations: Not Supported 00:15:11.646 Timestamp: Not Supported 00:15:11.646 Copy: Supported 00:15:11.646 Volatile Write Cache: Present 00:15:11.646 Atomic Write Unit (Normal): 1 00:15:11.646 Atomic Write Unit (PFail): 1 00:15:11.646 Atomic Compare & Write Unit: 1 00:15:11.646 Fused Compare & Write: Supported 00:15:11.646 Scatter-Gather List 00:15:11.647 SGL Command Set: Supported (Dword aligned) 00:15:11.647 SGL Keyed: Not Supported 00:15:11.647 SGL Bit Bucket Descriptor: Not Supported 00:15:11.647 SGL Metadata Pointer: Not Supported 00:15:11.647 Oversized SGL: Not Supported 00:15:11.647 SGL Metadata Address: Not Supported 00:15:11.647 SGL Offset: Not Supported 00:15:11.647 Transport SGL Data Block: Not Supported 00:15:11.647 Replay Protected Memory Block: Not Supported 00:15:11.647 00:15:11.647 Firmware Slot Information 00:15:11.647 ========================= 00:15:11.647 Active slot: 1 00:15:11.647 Slot 1 Firmware Revision: 24.09 00:15:11.647 00:15:11.647 00:15:11.647 Commands Supported and Effects 00:15:11.647 ============================== 00:15:11.647 Admin Commands 00:15:11.647 -------------- 00:15:11.647 Get Log Page (02h): Supported 00:15:11.647 Identify (06h): Supported 00:15:11.647 Abort (08h): Supported 00:15:11.647 Set Features (09h): Supported 00:15:11.647 Get Features (0Ah): Supported 00:15:11.647 Asynchronous Event Request (0Ch): Supported 00:15:11.647 Keep Alive (18h): Supported 00:15:11.647 I/O Commands 00:15:11.647 ------------ 00:15:11.647 Flush (00h): Supported LBA-Change 00:15:11.647 Write (01h): Supported LBA-Change 00:15:11.647 Read (02h): Supported 00:15:11.647 Compare (05h): Supported 00:15:11.647 Write Zeroes (08h): Supported LBA-Change 00:15:11.647 Dataset Management (09h): Supported LBA-Change 00:15:11.647 Copy (19h): Supported LBA-Change 00:15:11.647 00:15:11.647 Error Log 00:15:11.647 ========= 00:15:11.647 00:15:11.647 Arbitration 00:15:11.647 =========== 00:15:11.647 Arbitration Burst: 1 00:15:11.647 00:15:11.647 Power Management 00:15:11.647 ================ 00:15:11.647 Number of Power States: 1 00:15:11.647 Current Power State: Power State #0 00:15:11.647 Power State #0: 00:15:11.647 Max Power: 0.00 W 00:15:11.647 Non-Operational State: Operational 00:15:11.647 Entry Latency: Not Reported 00:15:11.647 Exit Latency: Not Reported 00:15:11.647 Relative Read Throughput: 0 00:15:11.647 Relative Read Latency: 0 00:15:11.647 Relative Write Throughput: 0 00:15:11.647 Relative Write Latency: 0 00:15:11.647 Idle Power: Not Reported 00:15:11.647 Active Power: Not Reported 00:15:11.647 Non-Operational Permissive Mode: Not Supported 00:15:11.647 00:15:11.647 Health Information 00:15:11.647 ================== 00:15:11.647 Critical Warnings: 00:15:11.647 Available Spare Space: OK 00:15:11.647 Temperature: OK 00:15:11.647 Device Reliability: OK 00:15:11.647 Read Only: No 00:15:11.647 Volatile Memory Backup: OK 00:15:11.647 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:11.647 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:11.647 Available Spare: 0% 00:15:11.647 
Available Sp[2024-07-14 01:01:00.822899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:11.647 [2024-07-14 01:01:00.822917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:11.647 [2024-07-14 01:01:00.822964] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:11.647 [2024-07-14 01:01:00.822982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.647 [2024-07-14 01:01:00.822993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.647 [2024-07-14 01:01:00.823004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.647 [2024-07-14 01:01:00.823014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:11.647 [2024-07-14 01:01:00.824879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:11.647 [2024-07-14 01:01:00.824901] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:11.647 [2024-07-14 01:01:00.825525] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.647 [2024-07-14 01:01:00.825619] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:11.647 [2024-07-14 01:01:00.825634] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:11.647 [2024-07-14 01:01:00.826525] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:11.647 [2024-07-14 01:01:00.826548] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:11.647 [2024-07-14 01:01:00.826604] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:11.647 [2024-07-14 01:01:00.829878] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:11.647 are Threshold: 0% 00:15:11.647 Life Percentage Used: 0% 00:15:11.647 Data Units Read: 0 00:15:11.647 Data Units Written: 0 00:15:11.647 Host Read Commands: 0 00:15:11.647 Host Write Commands: 0 00:15:11.647 Controller Busy Time: 0 minutes 00:15:11.647 Power Cycles: 0 00:15:11.647 Power On Hours: 0 hours 00:15:11.647 Unsafe Shutdowns: 0 00:15:11.647 Unrecoverable Media Errors: 0 00:15:11.647 Lifetime Error Log Entries: 0 00:15:11.647 Warning Temperature Time: 0 minutes 00:15:11.647 Critical Temperature Time: 0 minutes 00:15:11.647 00:15:11.647 Number of Queues 00:15:11.647 ================ 00:15:11.647 Number of I/O Submission Queues: 127 00:15:11.647 Number of I/O Completion Queues: 127 00:15:11.647 00:15:11.647 Active Namespaces 00:15:11.647 ================= 00:15:11.647 Namespace ID:1 00:15:11.647 Error Recovery Timeout: Unlimited 00:15:11.647 Command 
Set Identifier: NVM (00h) 00:15:11.647 Deallocate: Supported 00:15:11.647 Deallocated/Unwritten Error: Not Supported 00:15:11.647 Deallocated Read Value: Unknown 00:15:11.647 Deallocate in Write Zeroes: Not Supported 00:15:11.647 Deallocated Guard Field: 0xFFFF 00:15:11.647 Flush: Supported 00:15:11.647 Reservation: Supported 00:15:11.647 Namespace Sharing Capabilities: Multiple Controllers 00:15:11.647 Size (in LBAs): 131072 (0GiB) 00:15:11.647 Capacity (in LBAs): 131072 (0GiB) 00:15:11.647 Utilization (in LBAs): 131072 (0GiB) 00:15:11.647 NGUID: DFB3C858D8644D4B8F1BC0EE3549A923 00:15:11.647 UUID: dfb3c858-d864-4d4b-8f1b-c0ee3549a923 00:15:11.647 Thin Provisioning: Not Supported 00:15:11.647 Per-NS Atomic Units: Yes 00:15:11.647 Atomic Boundary Size (Normal): 0 00:15:11.647 Atomic Boundary Size (PFail): 0 00:15:11.648 Atomic Boundary Offset: 0 00:15:11.648 Maximum Single Source Range Length: 65535 00:15:11.648 Maximum Copy Length: 65535 00:15:11.648 Maximum Source Range Count: 1 00:15:11.648 NGUID/EUI64 Never Reused: No 00:15:11.648 Namespace Write Protected: No 00:15:11.648 Number of LBA Formats: 1 00:15:11.648 Current LBA Format: LBA Format #00 00:15:11.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:11.648 00:15:11.648 01:01:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:11.648 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.952 [2024-07-14 01:01:01.059730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.221 Initializing NVMe Controllers 00:15:17.221 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.221 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:17.221 Initialization complete. Launching workers. 00:15:17.221 ======================================================== 00:15:17.221 Latency(us) 00:15:17.221 Device Information : IOPS MiB/s Average min max 00:15:17.221 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34332.75 134.11 3727.48 1180.54 7366.87 00:15:17.221 ======================================================== 00:15:17.221 Total : 34332.75 134.11 3727.48 1180.54 7366.87 00:15:17.221 00:15:17.221 [2024-07-14 01:01:06.078312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.221 01:01:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:17.221 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.221 [2024-07-14 01:01:06.318450] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.479 Initializing NVMe Controllers 00:15:22.479 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.479 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:22.479 Initialization complete. Launching workers. 
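The two spdk_nvme_perf runs at steps @84 and @85 of nvmf_vfio_user.sh drive 4096-byte I/O at queue depth 128 for 5 seconds against the vfio-user controller: -w read for the pass that just reported ~34.3k IOPS, and -w write for the pass launched just above, both pinned to core mask 0x2 and addressed through the VFIOUSER transport ID. A minimal manual sketch under the same assumptions (same build tree and socket path as this run; the TRID variable name and the mixed-workload variant are illustrative only):
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # 4 KiB reads, queue depth 128, 5 seconds, core 0x2 (as in step @84)
  $PERF -r "$TRID" -q 128 -o 4096 -w read -t 5 -c 0x2
  # illustrative 50/50 mixed workload on the same endpoint (-M sets the read percentage)
  $PERF -r "$TRID" -q 128 -o 4096 -w randrw -M 50 -t 5 -c 0x2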
00:15:22.479 ======================================================== 00:15:22.479 Latency(us) 00:15:22.479 Device Information : IOPS MiB/s Average min max 00:15:22.479 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.13 62.65 7986.22 7738.38 11970.69 00:15:22.479 ======================================================== 00:15:22.479 Total : 16038.13 62.65 7986.22 7738.38 11970.69 00:15:22.479 00:15:22.479 [2024-07-14 01:01:11.356917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.479 01:01:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:22.479 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.479 [2024-07-14 01:01:11.560928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.744 [2024-07-14 01:01:16.627195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.744 Initializing NVMe Controllers 00:15:27.744 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.744 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:27.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:27.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:27.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:27.744 Initialization complete. Launching workers. 00:15:27.744 Starting thread on core 2 00:15:27.744 Starting thread on core 3 00:15:27.745 Starting thread on core 1 00:15:27.745 01:01:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:27.745 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.745 [2024-07-14 01:01:16.928360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.031 [2024-07-14 01:01:19.994667] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.031 Initializing NVMe Controllers 00:15:31.031 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.031 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.031 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:31.031 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:31.031 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:31.031 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:31.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:31.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:31.031 Initialization complete. Launching workers. 
00:15:31.031 Starting thread on core 1 with urgent priority queue 00:15:31.031 Starting thread on core 2 with urgent priority queue 00:15:31.031 Starting thread on core 3 with urgent priority queue 00:15:31.031 Starting thread on core 0 with urgent priority queue 00:15:31.031 SPDK bdev Controller (SPDK1 ) core 0: 4631.00 IO/s 21.59 secs/100000 ios 00:15:31.031 SPDK bdev Controller (SPDK1 ) core 1: 4984.00 IO/s 20.06 secs/100000 ios 00:15:31.031 SPDK bdev Controller (SPDK1 ) core 2: 5343.33 IO/s 18.71 secs/100000 ios 00:15:31.031 SPDK bdev Controller (SPDK1 ) core 3: 5309.67 IO/s 18.83 secs/100000 ios 00:15:31.031 ======================================================== 00:15:31.031 00:15:31.031 01:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:31.031 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.031 [2024-07-14 01:01:20.298425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.031 Initializing NVMe Controllers 00:15:31.031 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.031 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:31.031 Namespace ID: 1 size: 0GB 00:15:31.031 Initialization complete. 00:15:31.031 INFO: using host memory buffer for IO 00:15:31.031 Hello world! 00:15:31.031 [2024-07-14 01:01:20.331991] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.031 01:01:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:31.031 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.290 [2024-07-14 01:01:20.613412] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:32.225 Initializing NVMe Controllers 00:15:32.225 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:32.225 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:32.225 Initialization complete. Launching workers. 
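In the arbitration summary above, the second column is just the time the measured rate would take to complete 100000 I/Os; for core 0, 100000 / 4631.00 IO/s is roughly 21.59 s, matching the reported "21.59 secs/100000 ios". A throwaway check of that derivation (not part of the test itself):
  awk 'BEGIN { printf "%.2f\n", 100000 / 4631.00 }'   # prints 21.59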
00:15:32.225 submit (in ns) avg, min, max = 8969.6, 3491.1, 4017960.0 00:15:32.225 complete (in ns) avg, min, max = 24454.7, 2060.0, 4015857.8 00:15:32.225 00:15:32.225 Submit histogram 00:15:32.225 ================ 00:15:32.225 Range in us Cumulative Count 00:15:32.225 3.484 - 3.508: 0.0296% ( 4) 00:15:32.225 3.508 - 3.532: 0.2295% ( 27) 00:15:32.225 3.532 - 3.556: 0.8588% ( 85) 00:15:32.225 3.556 - 3.579: 2.7985% ( 262) 00:15:32.225 3.579 - 3.603: 7.1370% ( 586) 00:15:32.225 3.603 - 3.627: 14.0520% ( 934) 00:15:32.225 3.627 - 3.650: 23.8617% ( 1325) 00:15:32.225 3.650 - 3.674: 34.5154% ( 1439) 00:15:32.225 3.674 - 3.698: 43.4960% ( 1213) 00:15:32.225 3.698 - 3.721: 51.6991% ( 1108) 00:15:32.225 3.721 - 3.745: 57.3480% ( 763) 00:15:32.225 3.745 - 3.769: 62.3825% ( 680) 00:15:32.225 3.769 - 3.793: 66.4322% ( 547) 00:15:32.225 3.793 - 3.816: 69.9415% ( 474) 00:15:32.225 3.816 - 3.840: 72.9029% ( 400) 00:15:32.225 3.840 - 3.864: 76.1605% ( 440) 00:15:32.225 3.864 - 3.887: 79.6476% ( 471) 00:15:32.225 3.887 - 3.911: 82.8015% ( 426) 00:15:32.225 3.911 - 3.935: 85.4890% ( 363) 00:15:32.225 3.935 - 3.959: 87.6212% ( 288) 00:15:32.225 3.959 - 3.982: 89.3241% ( 230) 00:15:32.225 3.982 - 4.006: 90.7011% ( 186) 00:15:32.225 4.006 - 4.030: 91.9523% ( 169) 00:15:32.225 4.030 - 4.053: 93.0703% ( 151) 00:15:32.225 4.053 - 4.077: 93.8772% ( 109) 00:15:32.225 4.077 - 4.101: 94.4917% ( 83) 00:15:32.225 4.101 - 4.124: 95.0544% ( 76) 00:15:32.225 4.124 - 4.148: 95.5949% ( 73) 00:15:32.225 4.148 - 4.172: 95.8688% ( 37) 00:15:32.225 4.172 - 4.196: 96.0761% ( 28) 00:15:32.225 4.196 - 4.219: 96.2316% ( 21) 00:15:32.225 4.219 - 4.243: 96.3574% ( 17) 00:15:32.225 4.243 - 4.267: 96.4907% ( 18) 00:15:32.225 4.267 - 4.290: 96.5573% ( 9) 00:15:32.225 4.290 - 4.314: 96.6758% ( 16) 00:15:32.225 4.314 - 4.338: 96.7794% ( 14) 00:15:32.225 4.338 - 4.361: 96.8165% ( 5) 00:15:32.225 4.361 - 4.385: 96.9349% ( 16) 00:15:32.225 4.385 - 4.409: 96.9645% ( 4) 00:15:32.225 4.409 - 4.433: 97.0164% ( 7) 00:15:32.225 4.433 - 4.456: 97.0682% ( 7) 00:15:32.225 4.456 - 4.480: 97.0830% ( 2) 00:15:32.225 4.480 - 4.504: 97.0978% ( 2) 00:15:32.225 4.504 - 4.527: 97.1126% ( 2) 00:15:32.225 4.527 - 4.551: 97.1200% ( 1) 00:15:32.225 4.551 - 4.575: 97.1348% ( 2) 00:15:32.225 4.575 - 4.599: 97.1496% ( 2) 00:15:32.225 4.599 - 4.622: 97.1644% ( 2) 00:15:32.225 4.622 - 4.646: 97.1940% ( 4) 00:15:32.225 4.646 - 4.670: 97.2163% ( 3) 00:15:32.225 4.670 - 4.693: 97.2385% ( 3) 00:15:32.225 4.693 - 4.717: 97.2459% ( 1) 00:15:32.225 4.717 - 4.741: 97.2755% ( 4) 00:15:32.225 4.741 - 4.764: 97.3051% ( 4) 00:15:32.225 4.764 - 4.788: 97.3495% ( 6) 00:15:32.225 4.788 - 4.812: 97.3717% ( 3) 00:15:32.225 4.812 - 4.836: 97.4162% ( 6) 00:15:32.225 4.836 - 4.859: 97.4532% ( 5) 00:15:32.225 4.859 - 4.883: 97.5050% ( 7) 00:15:32.225 4.883 - 4.907: 97.5642% ( 8) 00:15:32.225 4.907 - 4.930: 97.6161% ( 7) 00:15:32.225 4.930 - 4.954: 97.6679% ( 7) 00:15:32.225 4.954 - 4.978: 97.7197% ( 7) 00:15:32.225 4.978 - 5.001: 97.7567% ( 5) 00:15:32.225 5.001 - 5.025: 97.7937% ( 5) 00:15:32.225 5.025 - 5.049: 97.8234% ( 4) 00:15:32.225 5.049 - 5.073: 97.8530% ( 4) 00:15:32.225 5.073 - 5.096: 97.8604% ( 1) 00:15:32.225 5.096 - 5.120: 97.8752% ( 2) 00:15:32.225 5.120 - 5.144: 97.8900% ( 2) 00:15:32.225 5.144 - 5.167: 97.9344% ( 6) 00:15:32.225 5.167 - 5.191: 97.9714% ( 5) 00:15:32.225 5.191 - 5.215: 97.9788% ( 1) 00:15:32.225 5.215 - 5.239: 98.0010% ( 3) 00:15:32.225 5.262 - 5.286: 98.0158% ( 2) 00:15:32.225 5.286 - 5.310: 98.0232% ( 1) 00:15:32.225 5.310 - 5.333: 98.0307% ( 1) 
00:15:32.225 5.333 - 5.357: 98.0381% ( 1) 00:15:32.225 5.357 - 5.381: 98.0529% ( 2) 00:15:32.225 5.381 - 5.404: 98.0751% ( 3) 00:15:32.225 5.404 - 5.428: 98.0973% ( 3) 00:15:32.225 5.428 - 5.452: 98.1269% ( 4) 00:15:32.225 5.452 - 5.476: 98.1491% ( 3) 00:15:32.225 5.476 - 5.499: 98.1565% ( 1) 00:15:32.225 5.499 - 5.523: 98.1935% ( 5) 00:15:32.225 5.523 - 5.547: 98.2157% ( 3) 00:15:32.225 5.547 - 5.570: 98.2305% ( 2) 00:15:32.225 5.570 - 5.594: 98.2528% ( 3) 00:15:32.225 5.594 - 5.618: 98.2824% ( 4) 00:15:32.225 5.618 - 5.641: 98.2898% ( 1) 00:15:32.225 5.665 - 5.689: 98.3120% ( 3) 00:15:32.225 5.689 - 5.713: 98.3342% ( 3) 00:15:32.225 5.713 - 5.736: 98.3416% ( 1) 00:15:32.225 5.760 - 5.784: 98.3564% ( 2) 00:15:32.225 5.807 - 5.831: 98.3638% ( 1) 00:15:32.225 5.831 - 5.855: 98.3712% ( 1) 00:15:32.225 5.926 - 5.950: 98.3786% ( 1) 00:15:32.225 5.950 - 5.973: 98.3934% ( 2) 00:15:32.225 6.021 - 6.044: 98.4008% ( 1) 00:15:32.225 6.068 - 6.116: 98.4156% ( 2) 00:15:32.225 6.163 - 6.210: 98.4230% ( 1) 00:15:32.225 6.305 - 6.353: 98.4453% ( 3) 00:15:32.225 6.447 - 6.495: 98.4527% ( 1) 00:15:32.225 6.542 - 6.590: 98.4601% ( 1) 00:15:32.225 6.732 - 6.779: 98.4675% ( 1) 00:15:32.225 7.111 - 7.159: 98.4749% ( 1) 00:15:32.225 7.206 - 7.253: 98.4897% ( 2) 00:15:32.225 7.443 - 7.490: 98.4971% ( 1) 00:15:32.225 7.490 - 7.538: 98.5045% ( 1) 00:15:32.225 7.538 - 7.585: 98.5119% ( 1) 00:15:32.225 7.585 - 7.633: 98.5193% ( 1) 00:15:32.225 7.680 - 7.727: 98.5267% ( 1) 00:15:32.225 7.727 - 7.775: 98.5341% ( 1) 00:15:32.225 7.822 - 7.870: 98.5415% ( 1) 00:15:32.225 7.917 - 7.964: 98.5563% ( 2) 00:15:32.225 7.964 - 8.012: 98.5711% ( 2) 00:15:32.225 8.012 - 8.059: 98.5933% ( 3) 00:15:32.225 8.059 - 8.107: 98.6007% ( 1) 00:15:32.225 8.154 - 8.201: 98.6081% ( 1) 00:15:32.225 8.201 - 8.249: 98.6155% ( 1) 00:15:32.225 8.249 - 8.296: 98.6303% ( 2) 00:15:32.225 8.344 - 8.391: 98.6451% ( 2) 00:15:32.225 8.391 - 8.439: 98.6600% ( 2) 00:15:32.225 8.581 - 8.628: 98.6822% ( 3) 00:15:32.225 8.628 - 8.676: 98.6896% ( 1) 00:15:32.225 8.676 - 8.723: 98.6970% ( 1) 00:15:32.225 8.723 - 8.770: 98.7044% ( 1) 00:15:32.225 8.770 - 8.818: 98.7118% ( 1) 00:15:32.225 8.818 - 8.865: 98.7192% ( 1) 00:15:32.225 9.007 - 9.055: 98.7266% ( 1) 00:15:32.225 9.055 - 9.102: 98.7340% ( 1) 00:15:32.225 9.102 - 9.150: 98.7562% ( 3) 00:15:32.225 9.150 - 9.197: 98.7636% ( 1) 00:15:32.225 9.244 - 9.292: 98.7710% ( 1) 00:15:32.225 9.292 - 9.339: 98.7784% ( 1) 00:15:32.225 9.339 - 9.387: 98.7858% ( 1) 00:15:32.225 9.387 - 9.434: 98.8006% ( 2) 00:15:32.225 9.529 - 9.576: 98.8080% ( 1) 00:15:32.225 9.624 - 9.671: 98.8228% ( 2) 00:15:32.225 9.908 - 9.956: 98.8302% ( 1) 00:15:32.226 9.956 - 10.003: 98.8376% ( 1) 00:15:32.226 10.003 - 10.050: 98.8450% ( 1) 00:15:32.226 10.145 - 10.193: 98.8599% ( 2) 00:15:32.226 10.382 - 10.430: 98.8673% ( 1) 00:15:32.226 10.430 - 10.477: 98.8747% ( 1) 00:15:32.226 10.524 - 10.572: 98.8895% ( 2) 00:15:32.226 10.667 - 10.714: 98.8969% ( 1) 00:15:32.226 10.714 - 10.761: 98.9043% ( 1) 00:15:32.226 10.856 - 10.904: 98.9117% ( 1) 00:15:32.226 11.046 - 11.093: 98.9191% ( 1) 00:15:32.226 11.093 - 11.141: 98.9265% ( 1) 00:15:32.226 11.330 - 11.378: 98.9339% ( 1) 00:15:32.226 11.567 - 11.615: 98.9413% ( 1) 00:15:32.226 11.852 - 11.899: 98.9487% ( 1) 00:15:32.226 12.231 - 12.326: 98.9561% ( 1) 00:15:32.226 12.326 - 12.421: 98.9635% ( 1) 00:15:32.226 12.421 - 12.516: 98.9709% ( 1) 00:15:32.226 12.610 - 12.705: 98.9783% ( 1) 00:15:32.226 12.705 - 12.800: 98.9931% ( 2) 00:15:32.226 13.179 - 13.274: 99.0005% ( 1) 00:15:32.226 13.748 - 13.843: 
99.0227% ( 3) 00:15:32.226 14.127 - 14.222: 99.0449% ( 3) 00:15:32.226 14.412 - 14.507: 99.0523% ( 1) 00:15:32.226 14.696 - 14.791: 99.0597% ( 1) 00:15:32.226 14.981 - 15.076: 99.0672% ( 1) 00:15:32.226 17.067 - 17.161: 99.0746% ( 1) 00:15:32.226 17.161 - 17.256: 99.0820% ( 1) 00:15:32.226 17.256 - 17.351: 99.0894% ( 1) 00:15:32.226 17.446 - 17.541: 99.1116% ( 3) 00:15:32.226 17.541 - 17.636: 99.1338% ( 3) 00:15:32.226 17.636 - 17.730: 99.1782% ( 6) 00:15:32.226 17.730 - 17.825: 99.2596% ( 11) 00:15:32.226 17.825 - 17.920: 99.3115% ( 7) 00:15:32.226 17.920 - 18.015: 99.3633% ( 7) 00:15:32.226 18.015 - 18.110: 99.3781% ( 2) 00:15:32.226 18.110 - 18.204: 99.4299% ( 7) 00:15:32.226 18.204 - 18.299: 99.4447% ( 2) 00:15:32.226 18.299 - 18.394: 99.5114% ( 9) 00:15:32.226 18.394 - 18.489: 99.5632% ( 7) 00:15:32.226 18.489 - 18.584: 99.5854% ( 3) 00:15:32.226 18.584 - 18.679: 99.6224% ( 5) 00:15:32.226 18.679 - 18.773: 99.6446% ( 3) 00:15:32.226 18.773 - 18.868: 99.6816% ( 5) 00:15:32.226 18.868 - 18.963: 99.7187% ( 5) 00:15:32.226 18.963 - 19.058: 99.7335% ( 2) 00:15:32.226 19.058 - 19.153: 99.7483% ( 2) 00:15:32.226 19.153 - 19.247: 99.7631% ( 2) 00:15:32.226 19.627 - 19.721: 99.7705% ( 1) 00:15:32.226 19.816 - 19.911: 99.7779% ( 1) 00:15:32.226 19.911 - 20.006: 99.7853% ( 1) 00:15:32.226 20.101 - 20.196: 99.7927% ( 1) 00:15:32.226 20.859 - 20.954: 99.8001% ( 1) 00:15:32.226 21.049 - 21.144: 99.8075% ( 1) 00:15:32.226 21.713 - 21.807: 99.8149% ( 1) 00:15:32.226 21.807 - 21.902: 99.8223% ( 1) 00:15:32.226 22.376 - 22.471: 99.8297% ( 1) 00:15:32.226 22.471 - 22.566: 99.8371% ( 1) 00:15:32.226 22.756 - 22.850: 99.8445% ( 1) 00:15:32.226 23.988 - 24.083: 99.8519% ( 1) 00:15:32.226 24.462 - 24.652: 99.8593% ( 1) 00:15:32.226 26.169 - 26.359: 99.8667% ( 1) 00:15:32.226 26.548 - 26.738: 99.8741% ( 1) 00:15:32.226 3980.705 - 4004.978: 99.9556% ( 11) 00:15:32.226 4004.978 - 4029.250: 100.0000% ( 6) 00:15:32.226 00:15:32.226 Complete histogram 00:15:32.226 ================== 00:15:32.226 Range in us Cumulative Count 00:15:32.226 2.050 - 2.062: 0.0666% ( 9) 00:15:32.226 2.062 - 2.074: 13.8595% ( 1863) 00:15:32.226 2.074 - 2.086: 36.7661% ( 3094) 00:15:32.226 2.086 - 2.098: 39.4758% ( 366) 00:15:32.226 2.098 - 2.110: 53.2020% ( 1854) 00:15:32.226 2.110 - 2.121: 61.9012% ( 1175) 00:15:32.226 2.121 - 2.133: 64.4407% ( 343) 00:15:32.226 2.133 - 2.145: 73.7618% ( 1259) 00:15:32.226 2.145 - 2.157: 79.6106% ( 790) 00:15:32.226 2.157 - 2.169: 81.1801% ( 212) 00:15:32.226 2.169 - 2.181: 85.8592% ( 632) 00:15:32.226 2.181 - 2.193: 88.3468% ( 336) 00:15:32.226 2.193 - 2.204: 89.2204% ( 118) 00:15:32.226 2.204 - 2.216: 90.5975% ( 186) 00:15:32.226 2.216 - 2.228: 92.3447% ( 236) 00:15:32.226 2.228 - 2.240: 93.6477% ( 176) 00:15:32.226 2.240 - 2.252: 94.2993% ( 88) 00:15:32.226 2.252 - 2.264: 94.6842% ( 52) 00:15:32.226 2.264 - 2.276: 94.8619% ( 24) 00:15:32.226 2.276 - 2.287: 95.0544% ( 26) 00:15:32.226 2.287 - 2.299: 95.3135% ( 35) 00:15:32.226 2.299 - 2.311: 95.4542% ( 19) 00:15:32.226 2.311 - 2.323: 95.6393% ( 25) 00:15:32.226 2.323 - 2.335: 95.6763% ( 5) 00:15:32.226 2.335 - 2.347: 95.7207% ( 6) 00:15:32.226 2.347 - 2.359: 95.7652% ( 6) 00:15:32.226 2.359 - 2.370: 95.9799% ( 29) 00:15:32.226 2.370 - 2.382: 96.2168% ( 32) 00:15:32.226 2.382 - 2.394: 96.5573% ( 46) 00:15:32.226 2.394 - 2.406: 96.9275% ( 50) 00:15:32.226 2.406 - 2.418: 97.1274% ( 27) 00:15:32.226 2.418 - 2.430: 97.2681% ( 19) 00:15:32.226 2.430 - 2.441: 97.3939% ( 17) 00:15:32.226 2.441 - 2.453: 97.5272% ( 18) 00:15:32.226 2.453 - 2.465: 97.6531% ( 
17) 00:15:32.226 2.465 - 2.477: 97.7419% ( 12) 00:15:32.226 2.477 - 2.489: 97.8382% ( 13) 00:15:32.226 2.489 - 2.501: 97.8530% ( 2) 00:15:32.226 2.501 - 2.513: 97.8974% ( 6) 00:15:32.226 2.513 - 2.524: 97.9344% ( 5) 00:15:32.226 2.524 - 2.536: 97.9418% ( 1) 00:15:32.226 2.536 - 2.548: 97.9862% ( 6) 00:15:32.226 2.548 - 2.560: 98.0084% ( 3) 00:15:32.226 2.560 - 2.572: 98.0307% ( 3) 00:15:32.226 2.572 - 2.584: 98.0455% ( 2) 00:15:32.226 2.584 - 2.596: 98.0603% ( 2) 00:15:32.226 2.596 - 2.607: 98.0825% ( 3) 00:15:32.226 2.607 - 2.619: 98.0973% ( 2) 00:15:32.226 2.619 - 2.631: 98.1047% ( 1) 00:15:32.226 2.631 - 2.643: 98.1269% ( 3) 00:15:32.226 2.643 - 2.655: 98.1343% ( 1) 00:15:32.226 2.679 - 2.690: 98.1491% ( 2) 00:15:32.226 2.690 - 2.702: 98.1639% ( 2) 00:15:32.226 2.726 - 2.738: 98.1713% ( 1) 00:15:32.226 2.738 - 2.750: 98.1861% ( 2) 00:15:32.226 2.773 - 2.785: 98.1935% ( 1) 00:15:32.226 2.785 - 2.797: 98.2009% ( 1) 00:15:32.226 2.809 - 2.821: 98.2083% ( 1) 00:15:32.226 2.821 - 2.833: 98.2157% ( 1) 00:15:32.226 2.833 - 2.844: 98.2231% ( 1) 00:15:32.226 2.856 - 2.868: 98.2305% ( 1) 00:15:32.226 2.868 - 2.880: 98.2380% ( 1) 00:15:32.226 2.904 - 2.916: 98.2528% ( 2) 00:15:32.226 2.939 - 2.951: 98.2602% ( 1) 00:15:32.226 2.999 - 3.010: 98.2750% ( 2) 00:15:32.226 3.010 - 3.022: 98.2824% ( 1) 00:15:32.226 3.034 - 3.058: 98.3268% ( 6) 00:15:32.226 3.058 - 3.081: 98.3342% ( 1) 00:15:32.226 3.105 - 3.129: 98.3490% ( 2) 00:15:32.226 3.129 - 3.153: 98.3638% ( 2) 00:15:32.226 3.153 - 3.176: 98.3786% ( 2) 00:15:32.226 3.200 - 3.224: 98.3860% ( 1) 00:15:32.226 3.247 - 3.271: 98.4082% ( 3) 00:15:32.226 3.271 - 3.295: 98.4230% ( 2) 00:15:32.226 3.295 - 3.319: 98.4304% ( 1) 00:15:32.226 3.342 - 3.366: 98.4527% ( 3) 00:15:32.226 3.366 - 3.390: 98.4601% ( 1) 00:15:32.226 3.390 - 3.413: 98.4749% ( 2) 00:15:32.226 3.413 - 3.437: 98.4897% ( 2) 00:15:32.226 3.437 - 3.461: 98.4971% ( 1) 00:15:32.226 3.461 - 3.484: 98.5045% ( 1) 00:15:32.226 3.484 - 3.508: 98.5119% ( 1) 00:15:32.226 3.508 - 3.532: 98.5267% ( 2) 00:15:32.226 3.532 - 3.556: 98.5415% ( 2) 00:15:32.226 3.603 - 3.627: 98.5489% ( 1) 00:15:32.226 3.627 - 3.650: 98.5785% ( 4) 00:15:32.226 3.650 - 3.674: 98.6007% ( 3) 00:15:32.226 3.698 - 3.721: 98.6081% ( 1) 00:15:32.226 3.721 - 3.745: 98.6229% ( 2) 00:15:32.226 3.745 - 3.769: 98.6303% ( 1) 00:15:32.226 3.769 - 3.793: 98.6377% ( 1) 00:15:32.226 3.793 - 3.816: 98.6526% ( 2) 00:15:32.226 3.816 - 3.840: 98.6600% ( 1) 00:15:32.226 3.840 - 3.864: 98.6748% ( 2) 00:15:32.226 3.887 - 3.911: 98.6822% ( 1) 00:15:32.226 3.982 - 4.006: 98.6896% ( 1) 00:15:32.226 4.338 - 4.361: 98.6970% ( 1) 00:15:32.226 4.741 - 4.764: 98.7044% ( 1) 00:15:32.226 5.096 - 5.120: 98.7118% ( 1) 00:15:32.226 5.902 - 5.926: 98.7192% ( 1) 00:15:32.226 5.950 - 5.973: 98.7266% ( 1) 00:15:32.226 6.353 - 6.400: 98.7340% ( 1) 00:15:32.226 6.684 - 6.732: 98.7414% ( 1) 00:15:32.226 6.874 - 6.921: 98.7488% ( 1) 00:15:32.226 7.016 - 7.064: 98.7562% ( 1) 00:15:32.226 7.064 - 7.111: 98.7636% ( 1) 00:15:32.226 7.159 - 7.206: 98.7710% ( 1) 00:15:32.226 7.585 - 7.633: 98.7858% ( 2) 00:15:32.226 8.012 - 8.059: 98.7932% ( 1) 00:15:32.226 8.249 - 8.296: 98.8006% ( 1) 00:15:32.226 8.344 - 8.391: 98.8080% ( 1) 00:15:32.226 8.676 - 8.723: 98.8154% ( 1) 00:15:32.226 8.960 - 9.007: 98.8228% ( 1) 00:15:32.226 14.886 - 14.981: 98.8302% ( 1) 00:15:32.226 15.455 - 15.550: 98.8450% ( 2) 00:15:32.226 15.550 - 15.644: 98.8599% ( 2) 00:15:32.226 15.644 - 15.739: 98.8821% ( 3) 00:15:32.226 15.739 - 15.834: 98.9043% ( 3) 00:15:32.226 15.834 - 15.929: 98.9265% ( 3) 
00:15:32.226 15.929 - 16.024: 98.9487% ( 3) 00:15:32.226 16.024 - 16.119: 98.9709% ( 3) 00:15:32.226 16.119 - 16.213: 99.0005% ( 4) 00:15:32.226 16.213 - 16.308: 99.0153% ( 2) 00:15:32.226 16.308 - 16.403: 99.0301% ( 2) 00:15:32.226 16.403 - 16.498: 99.0523% ( 3) 00:15:32.226 16.498 - 16.593: 99.1042% ( 7) 00:15:32.226 16.593 - 16.687: 99.1264% ( 3) 00:15:32.226 16.687 - 16.782: 99.1708% ( 6) 00:15:32.227 16.782 - 16.877: 99.2226% ( 7) 00:15:32.227 16.972 - 17.067: 99.2448% ( 3) 00:15:32.227 17.067 - 17.161: 99.2522% ( 1) 00:15:32.227 17.161 - 17.256: 99.2745% ( 3) 00:15:32.227 17.256 - 17.351: 99.2819% ( 1) 00:15:32.227 17.351 - 17.446: 99.2893% ( 1) 00:15:32.227 17.446 - 17.541: 99.3115%[2024-07-14 01:01:21.634822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:32.485 ( 3) 00:15:32.485 17.541 - 17.636: 99.3189% ( 1) 00:15:32.485 17.636 - 17.730: 99.3485% ( 4) 00:15:32.485 17.730 - 17.825: 99.3633% ( 2) 00:15:32.485 17.825 - 17.920: 99.3707% ( 1) 00:15:32.485 17.920 - 18.015: 99.3855% ( 2) 00:15:32.485 18.299 - 18.394: 99.3929% ( 1) 00:15:32.485 18.394 - 18.489: 99.4077% ( 2) 00:15:32.485 19.153 - 19.247: 99.4151% ( 1) 00:15:32.485 22.281 - 22.376: 99.4225% ( 1) 00:15:32.485 24.462 - 24.652: 99.4299% ( 1) 00:15:32.485 25.410 - 25.600: 99.4373% ( 1) 00:15:32.485 25.600 - 25.790: 99.4447% ( 1) 00:15:32.485 3956.433 - 3980.705: 99.4521% ( 1) 00:15:32.485 3980.705 - 4004.978: 99.7853% ( 45) 00:15:32.485 4004.978 - 4029.250: 100.0000% ( 29) 00:15:32.485 00:15:32.485 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:32.485 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:32.485 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:32.485 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:32.485 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:32.743 [ 00:15:32.743 { 00:15:32.743 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:32.743 "subtype": "Discovery", 00:15:32.743 "listen_addresses": [], 00:15:32.743 "allow_any_host": true, 00:15:32.743 "hosts": [] 00:15:32.743 }, 00:15:32.743 { 00:15:32.743 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:32.743 "subtype": "NVMe", 00:15:32.743 "listen_addresses": [ 00:15:32.743 { 00:15:32.743 "trtype": "VFIOUSER", 00:15:32.743 "adrfam": "IPv4", 00:15:32.743 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:32.743 "trsvcid": "0" 00:15:32.743 } 00:15:32.743 ], 00:15:32.743 "allow_any_host": true, 00:15:32.743 "hosts": [], 00:15:32.743 "serial_number": "SPDK1", 00:15:32.743 "model_number": "SPDK bdev Controller", 00:15:32.743 "max_namespaces": 32, 00:15:32.743 "min_cntlid": 1, 00:15:32.743 "max_cntlid": 65519, 00:15:32.743 "namespaces": [ 00:15:32.743 { 00:15:32.743 "nsid": 1, 00:15:32.743 "bdev_name": "Malloc1", 00:15:32.743 "name": "Malloc1", 00:15:32.743 "nguid": "DFB3C858D8644D4B8F1BC0EE3549A923", 00:15:32.743 "uuid": "dfb3c858-d864-4d4b-8f1b-c0ee3549a923" 00:15:32.743 } 00:15:32.743 ] 00:15:32.743 }, 00:15:32.743 { 00:15:32.743 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:32.743 "subtype": "NVMe", 00:15:32.743 "listen_addresses": [ 00:15:32.743 { 00:15:32.743 "trtype": "VFIOUSER", 
00:15:32.743 "adrfam": "IPv4", 00:15:32.743 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:32.743 "trsvcid": "0" 00:15:32.743 } 00:15:32.743 ], 00:15:32.743 "allow_any_host": true, 00:15:32.743 "hosts": [], 00:15:32.743 "serial_number": "SPDK2", 00:15:32.743 "model_number": "SPDK bdev Controller", 00:15:32.743 "max_namespaces": 32, 00:15:32.743 "min_cntlid": 1, 00:15:32.743 "max_cntlid": 65519, 00:15:32.743 "namespaces": [ 00:15:32.743 { 00:15:32.743 "nsid": 1, 00:15:32.743 "bdev_name": "Malloc2", 00:15:32.743 "name": "Malloc2", 00:15:32.743 "nguid": "AED1A78E75FD45D387556B291D156C56", 00:15:32.743 "uuid": "aed1a78e-75fd-45d3-8755-6b291d156c56" 00:15:32.743 } 00:15:32.743 ] 00:15:32.743 } 00:15:32.743 ] 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1106876 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:32.743 01:01:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:32.743 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.743 [2024-07-14 01:01:22.099318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.002 Malloc3 00:15:33.002 01:01:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:33.260 [2024-07-14 01:01:22.451985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.260 01:01:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.260 Asynchronous Event Request test 00:15:33.260 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.260 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:33.260 Registering asynchronous event callbacks... 00:15:33.260 Starting namespace attribute notice tests for all controllers... 00:15:33.260 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:33.260 aer_cb - Changed Namespace 00:15:33.260 Cleaning up... 
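The AER test just above follows a short sequence: the aer example is started against cnode1 with a touch-file handshake, then a new 64 MB malloc bdev with 512-byte blocks is created and attached to the subsystem as namespace 2, which raises the namespace-attribute-changed notice the tool reports ("aer_cb - Changed Namespace"). Reduced to the three rpc.py calls exactly as invoked in this run (only the RPC shorthand variable is added for readability):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 64 512 --name Malloc3                        # 64 MB bdev, 512-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2   # attach as NSID 2
  $RPC nvmf_get_subsystems                                             # dump the updated subsystem list
The nvmf_get_subsystems dump that follows confirms Malloc3 as nsid 2 alongside Malloc1 under nqn.2019-07.io.spdk:cnode1.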
00:15:33.520 [ 00:15:33.520 { 00:15:33.520 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.520 "subtype": "Discovery", 00:15:33.520 "listen_addresses": [], 00:15:33.520 "allow_any_host": true, 00:15:33.520 "hosts": [] 00:15:33.520 }, 00:15:33.520 { 00:15:33.520 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.520 "subtype": "NVMe", 00:15:33.520 "listen_addresses": [ 00:15:33.520 { 00:15:33.520 "trtype": "VFIOUSER", 00:15:33.520 "adrfam": "IPv4", 00:15:33.520 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.520 "trsvcid": "0" 00:15:33.520 } 00:15:33.520 ], 00:15:33.520 "allow_any_host": true, 00:15:33.520 "hosts": [], 00:15:33.520 "serial_number": "SPDK1", 00:15:33.520 "model_number": "SPDK bdev Controller", 00:15:33.520 "max_namespaces": 32, 00:15:33.520 "min_cntlid": 1, 00:15:33.520 "max_cntlid": 65519, 00:15:33.520 "namespaces": [ 00:15:33.520 { 00:15:33.520 "nsid": 1, 00:15:33.520 "bdev_name": "Malloc1", 00:15:33.520 "name": "Malloc1", 00:15:33.520 "nguid": "DFB3C858D8644D4B8F1BC0EE3549A923", 00:15:33.520 "uuid": "dfb3c858-d864-4d4b-8f1b-c0ee3549a923" 00:15:33.520 }, 00:15:33.520 { 00:15:33.520 "nsid": 2, 00:15:33.520 "bdev_name": "Malloc3", 00:15:33.520 "name": "Malloc3", 00:15:33.520 "nguid": "F6E958F720D6490EA95C9ABE70054157", 00:15:33.520 "uuid": "f6e958f7-20d6-490e-a95c-9abe70054157" 00:15:33.520 } 00:15:33.520 ] 00:15:33.520 }, 00:15:33.520 { 00:15:33.520 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.520 "subtype": "NVMe", 00:15:33.520 "listen_addresses": [ 00:15:33.520 { 00:15:33.520 "trtype": "VFIOUSER", 00:15:33.520 "adrfam": "IPv4", 00:15:33.520 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.520 "trsvcid": "0" 00:15:33.520 } 00:15:33.520 ], 00:15:33.520 "allow_any_host": true, 00:15:33.520 "hosts": [], 00:15:33.520 "serial_number": "SPDK2", 00:15:33.520 "model_number": "SPDK bdev Controller", 00:15:33.520 "max_namespaces": 32, 00:15:33.520 "min_cntlid": 1, 00:15:33.520 "max_cntlid": 65519, 00:15:33.520 "namespaces": [ 00:15:33.520 { 00:15:33.520 "nsid": 1, 00:15:33.520 "bdev_name": "Malloc2", 00:15:33.520 "name": "Malloc2", 00:15:33.520 "nguid": "AED1A78E75FD45D387556B291D156C56", 00:15:33.520 "uuid": "aed1a78e-75fd-45d3-8755-6b291d156c56" 00:15:33.520 } 00:15:33.520 ] 00:15:33.520 } 00:15:33.520 ] 00:15:33.520 01:01:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1106876 00:15:33.520 01:01:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.520 01:01:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.520 01:01:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.520 01:01:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:33.520 [2024-07-14 01:01:22.730818] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:15:33.520 [2024-07-14 01:01:22.730863] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1106894 ] 00:15:33.520 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.520 [2024-07-14 01:01:22.766004] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:33.520 [2024-07-14 01:01:22.768372] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:33.520 [2024-07-14 01:01:22.768401] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f096e283000 00:15:33.520 [2024-07-14 01:01:22.769375] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.520 [2024-07-14 01:01:22.770380] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.520 [2024-07-14 01:01:22.771386] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.520 [2024-07-14 01:01:22.772389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.520 [2024-07-14 01:01:22.773399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.520 [2024-07-14 01:01:22.774399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.520 [2024-07-14 01:01:22.775408] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:33.520 [2024-07-14 01:01:22.776420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:33.520 [2024-07-14 01:01:22.777433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:33.520 [2024-07-14 01:01:22.777455] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f096d037000 00:15:33.520 [2024-07-14 01:01:22.778597] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:33.520 [2024-07-14 01:01:22.792524] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:33.520 [2024-07-14 01:01:22.792559] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:33.520 [2024-07-14 01:01:22.797684] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:33.520 [2024-07-14 01:01:22.797740] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:33.520 [2024-07-14 01:01:22.797830] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:15:33.520 [2024-07-14 01:01:22.797876] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:33.520 [2024-07-14 01:01:22.797895] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:33.520 [2024-07-14 01:01:22.798688] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:33.520 [2024-07-14 01:01:22.798708] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:33.520 [2024-07-14 01:01:22.798720] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:33.520 [2024-07-14 01:01:22.799691] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:33.520 [2024-07-14 01:01:22.799710] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:33.520 [2024-07-14 01:01:22.799723] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:33.520 [2024-07-14 01:01:22.800699] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:33.520 [2024-07-14 01:01:22.800720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:33.520 [2024-07-14 01:01:22.801708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:33.520 [2024-07-14 01:01:22.801728] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:33.520 [2024-07-14 01:01:22.801737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:33.520 [2024-07-14 01:01:22.801748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:33.520 [2024-07-14 01:01:22.801859] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:33.520 [2024-07-14 01:01:22.801874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:33.520 [2024-07-14 01:01:22.801883] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:33.521 [2024-07-14 01:01:22.802722] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:33.521 [2024-07-14 01:01:22.803732] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:33.521 [2024-07-14 01:01:22.804748] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:33.521 [2024-07-14 01:01:22.805747] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.521 [2024-07-14 01:01:22.805831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:33.521 [2024-07-14 01:01:22.806760] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:33.521 [2024-07-14 01:01:22.806780] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:33.521 [2024-07-14 01:01:22.806789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.806812] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:33.521 [2024-07-14 01:01:22.806828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.806872] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.521 [2024-07-14 01:01:22.806885] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.521 [2024-07-14 01:01:22.806905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.814882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.814906] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:33.521 [2024-07-14 01:01:22.814919] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:33.521 [2024-07-14 01:01:22.814928] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:33.521 [2024-07-14 01:01:22.814936] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:33.521 [2024-07-14 01:01:22.814944] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:33.521 [2024-07-14 01:01:22.814953] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:33.521 [2024-07-14 01:01:22.814961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.814975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.814991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:15:33.521 [2024-07-14 01:01:22.822876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.822904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.521 [2024-07-14 01:01:22.822919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.521 [2024-07-14 01:01:22.822931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.521 [2024-07-14 01:01:22.822942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:33.521 [2024-07-14 01:01:22.822951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.822966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.822981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.830875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.830894] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:33.521 [2024-07-14 01:01:22.830903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.830919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.830930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.830944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.838875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.838948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.838963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.838976] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:33.521 [2024-07-14 01:01:22.838984] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:33.521 [2024-07-14 01:01:22.838994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.846891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.846915] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:33.521 [2024-07-14 01:01:22.846934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.846949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.846961] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.521 [2024-07-14 01:01:22.846969] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.521 [2024-07-14 01:01:22.846979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.854889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.854917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.854934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.854947] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:33.521 [2024-07-14 01:01:22.854955] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.521 [2024-07-14 01:01:22.854964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.862889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.862912] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.862925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.862942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.862954] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.862963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.862972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:33.521 
[2024-07-14 01:01:22.862981] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:33.521 [2024-07-14 01:01:22.862989] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:33.521 [2024-07-14 01:01:22.862997] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:33.521 [2024-07-14 01:01:22.863023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.870880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.870905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.878880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.878905] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.886882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.886912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.894895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:33.521 [2024-07-14 01:01:22.894931] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:33.521 [2024-07-14 01:01:22.894943] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:33.521 [2024-07-14 01:01:22.894949] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:33.521 [2024-07-14 01:01:22.894955] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:33.521 [2024-07-14 01:01:22.894964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:33.521 [2024-07-14 01:01:22.894976] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:33.521 [2024-07-14 01:01:22.894983] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:33.521 [2024-07-14 01:01:22.894992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:33.521 [2024-07-14 01:01:22.895003] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:33.521 [2024-07-14 01:01:22.895011] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:33.521 [2024-07-14 01:01:22.895019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:15:33.521 [2024-07-14 01:01:22.895031] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:33.521 [2024-07-14 01:01:22.895043] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:33.522 [2024-07-14 01:01:22.895052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:33.522 [2024-07-14 01:01:22.902883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:33.522 [2024-07-14 01:01:22.902911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:33.522 [2024-07-14 01:01:22.902928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:33.522 [2024-07-14 01:01:22.902939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:33.522 ===================================================== 00:15:33.522 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:33.522 ===================================================== 00:15:33.522 Controller Capabilities/Features 00:15:33.522 ================================ 00:15:33.522 Vendor ID: 4e58 00:15:33.522 Subsystem Vendor ID: 4e58 00:15:33.522 Serial Number: SPDK2 00:15:33.522 Model Number: SPDK bdev Controller 00:15:33.522 Firmware Version: 24.09 00:15:33.522 Recommended Arb Burst: 6 00:15:33.522 IEEE OUI Identifier: 8d 6b 50 00:15:33.522 Multi-path I/O 00:15:33.522 May have multiple subsystem ports: Yes 00:15:33.522 May have multiple controllers: Yes 00:15:33.522 Associated with SR-IOV VF: No 00:15:33.522 Max Data Transfer Size: 131072 00:15:33.522 Max Number of Namespaces: 32 00:15:33.522 Max Number of I/O Queues: 127 00:15:33.522 NVMe Specification Version (VS): 1.3 00:15:33.522 NVMe Specification Version (Identify): 1.3 00:15:33.522 Maximum Queue Entries: 256 00:15:33.522 Contiguous Queues Required: Yes 00:15:33.522 Arbitration Mechanisms Supported 00:15:33.522 Weighted Round Robin: Not Supported 00:15:33.522 Vendor Specific: Not Supported 00:15:33.522 Reset Timeout: 15000 ms 00:15:33.522 Doorbell Stride: 4 bytes 00:15:33.522 NVM Subsystem Reset: Not Supported 00:15:33.522 Command Sets Supported 00:15:33.522 NVM Command Set: Supported 00:15:33.522 Boot Partition: Not Supported 00:15:33.522 Memory Page Size Minimum: 4096 bytes 00:15:33.522 Memory Page Size Maximum: 4096 bytes 00:15:33.522 Persistent Memory Region: Not Supported 00:15:33.522 Optional Asynchronous Events Supported 00:15:33.522 Namespace Attribute Notices: Supported 00:15:33.522 Firmware Activation Notices: Not Supported 00:15:33.522 ANA Change Notices: Not Supported 00:15:33.522 PLE Aggregate Log Change Notices: Not Supported 00:15:33.522 LBA Status Info Alert Notices: Not Supported 00:15:33.522 EGE Aggregate Log Change Notices: Not Supported 00:15:33.522 Normal NVM Subsystem Shutdown event: Not Supported 00:15:33.522 Zone Descriptor Change Notices: Not Supported 00:15:33.522 Discovery Log Change Notices: Not Supported 00:15:33.522 Controller Attributes 00:15:33.522 128-bit Host Identifier: Supported 00:15:33.522 Non-Operational Permissive Mode: Not Supported 00:15:33.522 NVM Sets: Not Supported 00:15:33.522 Read Recovery Levels: Not Supported 
00:15:33.522 Endurance Groups: Not Supported 00:15:33.522 Predictable Latency Mode: Not Supported 00:15:33.522 Traffic Based Keep ALive: Not Supported 00:15:33.522 Namespace Granularity: Not Supported 00:15:33.522 SQ Associations: Not Supported 00:15:33.522 UUID List: Not Supported 00:15:33.522 Multi-Domain Subsystem: Not Supported 00:15:33.522 Fixed Capacity Management: Not Supported 00:15:33.522 Variable Capacity Management: Not Supported 00:15:33.522 Delete Endurance Group: Not Supported 00:15:33.522 Delete NVM Set: Not Supported 00:15:33.522 Extended LBA Formats Supported: Not Supported 00:15:33.522 Flexible Data Placement Supported: Not Supported 00:15:33.522 00:15:33.522 Controller Memory Buffer Support 00:15:33.522 ================================ 00:15:33.522 Supported: No 00:15:33.522 00:15:33.522 Persistent Memory Region Support 00:15:33.522 ================================ 00:15:33.522 Supported: No 00:15:33.522 00:15:33.522 Admin Command Set Attributes 00:15:33.522 ============================ 00:15:33.522 Security Send/Receive: Not Supported 00:15:33.522 Format NVM: Not Supported 00:15:33.522 Firmware Activate/Download: Not Supported 00:15:33.522 Namespace Management: Not Supported 00:15:33.522 Device Self-Test: Not Supported 00:15:33.522 Directives: Not Supported 00:15:33.522 NVMe-MI: Not Supported 00:15:33.522 Virtualization Management: Not Supported 00:15:33.522 Doorbell Buffer Config: Not Supported 00:15:33.522 Get LBA Status Capability: Not Supported 00:15:33.522 Command & Feature Lockdown Capability: Not Supported 00:15:33.522 Abort Command Limit: 4 00:15:33.522 Async Event Request Limit: 4 00:15:33.522 Number of Firmware Slots: N/A 00:15:33.522 Firmware Slot 1 Read-Only: N/A 00:15:33.522 Firmware Activation Without Reset: N/A 00:15:33.522 Multiple Update Detection Support: N/A 00:15:33.522 Firmware Update Granularity: No Information Provided 00:15:33.522 Per-Namespace SMART Log: No 00:15:33.522 Asymmetric Namespace Access Log Page: Not Supported 00:15:33.522 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:33.522 Command Effects Log Page: Supported 00:15:33.522 Get Log Page Extended Data: Supported 00:15:33.522 Telemetry Log Pages: Not Supported 00:15:33.522 Persistent Event Log Pages: Not Supported 00:15:33.522 Supported Log Pages Log Page: May Support 00:15:33.522 Commands Supported & Effects Log Page: Not Supported 00:15:33.522 Feature Identifiers & Effects Log Page:May Support 00:15:33.522 NVMe-MI Commands & Effects Log Page: May Support 00:15:33.522 Data Area 4 for Telemetry Log: Not Supported 00:15:33.522 Error Log Page Entries Supported: 128 00:15:33.522 Keep Alive: Supported 00:15:33.522 Keep Alive Granularity: 10000 ms 00:15:33.522 00:15:33.522 NVM Command Set Attributes 00:15:33.522 ========================== 00:15:33.522 Submission Queue Entry Size 00:15:33.522 Max: 64 00:15:33.522 Min: 64 00:15:33.522 Completion Queue Entry Size 00:15:33.522 Max: 16 00:15:33.522 Min: 16 00:15:33.522 Number of Namespaces: 32 00:15:33.522 Compare Command: Supported 00:15:33.522 Write Uncorrectable Command: Not Supported 00:15:33.522 Dataset Management Command: Supported 00:15:33.522 Write Zeroes Command: Supported 00:15:33.522 Set Features Save Field: Not Supported 00:15:33.522 Reservations: Not Supported 00:15:33.522 Timestamp: Not Supported 00:15:33.522 Copy: Supported 00:15:33.522 Volatile Write Cache: Present 00:15:33.522 Atomic Write Unit (Normal): 1 00:15:33.522 Atomic Write Unit (PFail): 1 00:15:33.522 Atomic Compare & Write Unit: 1 00:15:33.522 Fused Compare & Write: 
Supported 00:15:33.522 Scatter-Gather List 00:15:33.522 SGL Command Set: Supported (Dword aligned) 00:15:33.522 SGL Keyed: Not Supported 00:15:33.522 SGL Bit Bucket Descriptor: Not Supported 00:15:33.522 SGL Metadata Pointer: Not Supported 00:15:33.522 Oversized SGL: Not Supported 00:15:33.522 SGL Metadata Address: Not Supported 00:15:33.522 SGL Offset: Not Supported 00:15:33.522 Transport SGL Data Block: Not Supported 00:15:33.522 Replay Protected Memory Block: Not Supported 00:15:33.522 00:15:33.522 Firmware Slot Information 00:15:33.522 ========================= 00:15:33.522 Active slot: 1 00:15:33.522 Slot 1 Firmware Revision: 24.09 00:15:33.522 00:15:33.522 00:15:33.522 Commands Supported and Effects 00:15:33.522 ============================== 00:15:33.522 Admin Commands 00:15:33.522 -------------- 00:15:33.522 Get Log Page (02h): Supported 00:15:33.522 Identify (06h): Supported 00:15:33.522 Abort (08h): Supported 00:15:33.522 Set Features (09h): Supported 00:15:33.522 Get Features (0Ah): Supported 00:15:33.522 Asynchronous Event Request (0Ch): Supported 00:15:33.522 Keep Alive (18h): Supported 00:15:33.522 I/O Commands 00:15:33.522 ------------ 00:15:33.522 Flush (00h): Supported LBA-Change 00:15:33.522 Write (01h): Supported LBA-Change 00:15:33.522 Read (02h): Supported 00:15:33.522 Compare (05h): Supported 00:15:33.522 Write Zeroes (08h): Supported LBA-Change 00:15:33.522 Dataset Management (09h): Supported LBA-Change 00:15:33.522 Copy (19h): Supported LBA-Change 00:15:33.522 00:15:33.522 Error Log 00:15:33.522 ========= 00:15:33.522 00:15:33.522 Arbitration 00:15:33.522 =========== 00:15:33.522 Arbitration Burst: 1 00:15:33.522 00:15:33.522 Power Management 00:15:33.522 ================ 00:15:33.522 Number of Power States: 1 00:15:33.522 Current Power State: Power State #0 00:15:33.522 Power State #0: 00:15:33.522 Max Power: 0.00 W 00:15:33.522 Non-Operational State: Operational 00:15:33.522 Entry Latency: Not Reported 00:15:33.522 Exit Latency: Not Reported 00:15:33.522 Relative Read Throughput: 0 00:15:33.522 Relative Read Latency: 0 00:15:33.522 Relative Write Throughput: 0 00:15:33.522 Relative Write Latency: 0 00:15:33.522 Idle Power: Not Reported 00:15:33.522 Active Power: Not Reported 00:15:33.522 Non-Operational Permissive Mode: Not Supported 00:15:33.522 00:15:33.522 Health Information 00:15:33.522 ================== 00:15:33.522 Critical Warnings: 00:15:33.522 Available Spare Space: OK 00:15:33.522 Temperature: OK 00:15:33.522 Device Reliability: OK 00:15:33.522 Read Only: No 00:15:33.522 Volatile Memory Backup: OK 00:15:33.522 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:33.522 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:33.522 Available Spare: 0% 00:15:33.523 Available Sp[2024-07-14 01:01:22.903051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:33.523 [2024-07-14 01:01:22.910889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:33.523 [2024-07-14 01:01:22.910942] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:33.523 [2024-07-14 01:01:22.910961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.523 [2024-07-14 01:01:22.910972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.523 [2024-07-14 01:01:22.910982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.523 [2024-07-14 01:01:22.910992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:33.523 [2024-07-14 01:01:22.911059] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:33.523 [2024-07-14 01:01:22.911079] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:33.523 [2024-07-14 01:01:22.912055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.523 [2024-07-14 01:01:22.912145] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:33.523 [2024-07-14 01:01:22.912160] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:33.523 [2024-07-14 01:01:22.913065] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:33.523 [2024-07-14 01:01:22.913089] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:33.523 [2024-07-14 01:01:22.913142] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:33.523 [2024-07-14 01:01:22.914369] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:33.781 are Threshold: 0% 00:15:33.781 Life Percentage Used: 0% 00:15:33.781 Data Units Read: 0 00:15:33.781 Data Units Written: 0 00:15:33.781 Host Read Commands: 0 00:15:33.781 Host Write Commands: 0 00:15:33.781 Controller Busy Time: 0 minutes 00:15:33.781 Power Cycles: 0 00:15:33.781 Power On Hours: 0 hours 00:15:33.781 Unsafe Shutdowns: 0 00:15:33.781 Unrecoverable Media Errors: 0 00:15:33.781 Lifetime Error Log Entries: 0 00:15:33.781 Warning Temperature Time: 0 minutes 00:15:33.781 Critical Temperature Time: 0 minutes 00:15:33.781 00:15:33.781 Number of Queues 00:15:33.781 ================ 00:15:33.781 Number of I/O Submission Queues: 127 00:15:33.781 Number of I/O Completion Queues: 127 00:15:33.781 00:15:33.781 Active Namespaces 00:15:33.781 ================= 00:15:33.781 Namespace ID:1 00:15:33.781 Error Recovery Timeout: Unlimited 00:15:33.781 Command Set Identifier: NVM (00h) 00:15:33.781 Deallocate: Supported 00:15:33.781 Deallocated/Unwritten Error: Not Supported 00:15:33.781 Deallocated Read Value: Unknown 00:15:33.781 Deallocate in Write Zeroes: Not Supported 00:15:33.781 Deallocated Guard Field: 0xFFFF 00:15:33.781 Flush: Supported 00:15:33.781 Reservation: Supported 00:15:33.781 Namespace Sharing Capabilities: Multiple Controllers 00:15:33.781 Size (in LBAs): 131072 (0GiB) 00:15:33.781 Capacity (in LBAs): 131072 (0GiB) 00:15:33.781 Utilization (in LBAs): 131072 (0GiB) 00:15:33.781 NGUID: AED1A78E75FD45D387556B291D156C56 00:15:33.781 UUID: aed1a78e-75fd-45d3-8755-6b291d156c56 00:15:33.781 Thin Provisioning: Not Supported 00:15:33.781 Per-NS Atomic Units: Yes 00:15:33.781 Atomic Boundary Size (Normal): 0 00:15:33.781 Atomic Boundary Size 
(PFail): 0 00:15:33.781 Atomic Boundary Offset: 0 00:15:33.781 Maximum Single Source Range Length: 65535 00:15:33.781 Maximum Copy Length: 65535 00:15:33.781 Maximum Source Range Count: 1 00:15:33.781 NGUID/EUI64 Never Reused: No 00:15:33.781 Namespace Write Protected: No 00:15:33.781 Number of LBA Formats: 1 00:15:33.781 Current LBA Format: LBA Format #00 00:15:33.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:33.781 00:15:33.781 01:01:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:33.781 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.781 [2024-07-14 01:01:23.145659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.044 Initializing NVMe Controllers 00:15:39.044 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:39.044 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:39.044 Initialization complete. Launching workers. 00:15:39.044 ======================================================== 00:15:39.044 Latency(us) 00:15:39.044 Device Information : IOPS MiB/s Average min max 00:15:39.044 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34878.39 136.24 3670.65 1184.03 9540.01 00:15:39.044 ======================================================== 00:15:39.044 Total : 34878.39 136.24 3670.65 1184.03 9540.01 00:15:39.044 00:15:39.044 [2024-07-14 01:01:28.253247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.044 01:01:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:39.044 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.301 [2024-07-14 01:01:28.494983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.566 Initializing NVMe Controllers 00:15:44.566 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.566 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:44.566 Initialization complete. Launching workers. 
00:15:44.566 ======================================================== 00:15:44.566 Latency(us) 00:15:44.566 Device Information : IOPS MiB/s Average min max 00:15:44.566 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32076.51 125.30 3989.45 1196.78 10242.67 00:15:44.566 ======================================================== 00:15:44.566 Total : 32076.51 125.30 3989.45 1196.78 10242.67 00:15:44.566 00:15:44.566 [2024-07-14 01:01:33.514382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.566 01:01:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:44.566 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.566 [2024-07-14 01:01:33.722147] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.840 [2024-07-14 01:01:38.862997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.840 Initializing NVMe Controllers 00:15:49.840 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.840 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.840 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:49.840 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:49.840 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:49.840 Initialization complete. Launching workers. 00:15:49.840 Starting thread on core 2 00:15:49.840 Starting thread on core 3 00:15:49.840 Starting thread on core 1 00:15:49.840 01:01:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:49.840 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.840 [2024-07-14 01:01:39.166608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:53.130 [2024-07-14 01:01:42.248122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:53.130 Initializing NVMe Controllers 00:15:53.130 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.130 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.130 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:53.130 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:53.130 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:53.130 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:53.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:53.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:53.130 Initialization complete. Launching workers. 
00:15:53.130 Starting thread on core 1 with urgent priority queue 00:15:53.130 Starting thread on core 2 with urgent priority queue 00:15:53.130 Starting thread on core 3 with urgent priority queue 00:15:53.130 Starting thread on core 0 with urgent priority queue 00:15:53.130 SPDK bdev Controller (SPDK2 ) core 0: 5263.33 IO/s 19.00 secs/100000 ios 00:15:53.130 SPDK bdev Controller (SPDK2 ) core 1: 5678.00 IO/s 17.61 secs/100000 ios 00:15:53.130 SPDK bdev Controller (SPDK2 ) core 2: 5323.33 IO/s 18.79 secs/100000 ios 00:15:53.130 SPDK bdev Controller (SPDK2 ) core 3: 5909.00 IO/s 16.92 secs/100000 ios 00:15:53.130 ======================================================== 00:15:53.130 00:15:53.130 01:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:53.130 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.389 [2024-07-14 01:01:42.547465] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:53.389 Initializing NVMe Controllers 00:15:53.389 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.389 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:53.389 Namespace ID: 1 size: 0GB 00:15:53.389 Initialization complete. 00:15:53.389 INFO: using host memory buffer for IO 00:15:53.389 Hello world! 00:15:53.389 [2024-07-14 01:01:42.556564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:53.389 01:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:53.389 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.648 [2024-07-14 01:01:42.832186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:54.587 Initializing NVMe Controllers 00:15:54.587 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:54.587 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:54.587 Initialization complete. Launching workers. 
00:15:54.587 submit (in ns) avg, min, max = 8447.9, 3513.3, 6995140.0 00:15:54.587 complete (in ns) avg, min, max = 25894.0, 2082.2, 4028875.6 00:15:54.587 00:15:54.587 Submit histogram 00:15:54.587 ================ 00:15:54.587 Range in us Cumulative Count 00:15:54.587 3.508 - 3.532: 0.1220% ( 16) 00:15:54.587 3.532 - 3.556: 0.6176% ( 65) 00:15:54.587 3.556 - 3.579: 2.4093% ( 235) 00:15:54.587 3.579 - 3.603: 5.4895% ( 404) 00:15:54.587 3.603 - 3.627: 11.4822% ( 786) 00:15:54.587 3.627 - 3.650: 19.7697% ( 1087) 00:15:54.587 3.650 - 3.674: 29.1095% ( 1225) 00:15:54.587 3.674 - 3.698: 37.4276% ( 1091) 00:15:54.587 3.698 - 3.721: 46.5233% ( 1193) 00:15:54.587 3.721 - 3.745: 52.5770% ( 794) 00:15:54.587 3.745 - 3.769: 57.3422% ( 625) 00:15:54.587 3.769 - 3.793: 60.9408% ( 472) 00:15:54.587 3.793 - 3.816: 64.4556% ( 461) 00:15:54.587 3.816 - 3.840: 67.6578% ( 420) 00:15:54.587 3.840 - 3.864: 71.2946% ( 477) 00:15:54.587 3.864 - 3.887: 75.1067% ( 500) 00:15:54.587 3.887 - 3.911: 78.7054% ( 472) 00:15:54.587 3.911 - 3.935: 82.3269% ( 475) 00:15:54.587 3.935 - 3.959: 85.3233% ( 393) 00:15:54.587 3.959 - 3.982: 87.5267% ( 289) 00:15:54.587 3.982 - 4.006: 89.4633% ( 254) 00:15:54.587 4.006 - 4.030: 90.8814% ( 186) 00:15:54.587 4.030 - 4.053: 91.8954% ( 133) 00:15:54.587 4.053 - 4.077: 92.8637% ( 127) 00:15:54.587 4.077 - 4.101: 93.7557% ( 117) 00:15:54.587 4.101 - 4.124: 94.5486% ( 104) 00:15:54.587 4.124 - 4.148: 95.1891% ( 84) 00:15:54.587 4.148 - 4.172: 95.6999% ( 67) 00:15:54.587 4.172 - 4.196: 96.0659% ( 48) 00:15:54.587 4.196 - 4.219: 96.3022% ( 31) 00:15:54.587 4.219 - 4.243: 96.4623% ( 21) 00:15:54.587 4.243 - 4.267: 96.5996% ( 18) 00:15:54.587 4.267 - 4.290: 96.7368% ( 18) 00:15:54.588 4.290 - 4.314: 96.8283% ( 12) 00:15:54.588 4.314 - 4.338: 96.9122% ( 11) 00:15:54.588 4.338 - 4.361: 97.0189% ( 14) 00:15:54.588 4.361 - 4.385: 97.0570% ( 5) 00:15:54.588 4.385 - 4.409: 97.1104% ( 7) 00:15:54.588 4.409 - 4.433: 97.1409% ( 4) 00:15:54.588 4.433 - 4.456: 97.1638% ( 3) 00:15:54.588 4.456 - 4.480: 97.1866% ( 3) 00:15:54.588 4.480 - 4.504: 97.2171% ( 4) 00:15:54.588 4.504 - 4.527: 97.2248% ( 1) 00:15:54.588 4.527 - 4.551: 97.2324% ( 1) 00:15:54.588 4.551 - 4.575: 97.2476% ( 2) 00:15:54.588 4.575 - 4.599: 97.2553% ( 1) 00:15:54.588 4.599 - 4.622: 97.2705% ( 2) 00:15:54.588 4.622 - 4.646: 97.2781% ( 1) 00:15:54.588 4.646 - 4.670: 97.2934% ( 2) 00:15:54.588 4.670 - 4.693: 97.3086% ( 2) 00:15:54.588 4.693 - 4.717: 97.3163% ( 1) 00:15:54.588 4.717 - 4.741: 97.3239% ( 1) 00:15:54.588 4.741 - 4.764: 97.3620% ( 5) 00:15:54.588 4.764 - 4.788: 97.4382% ( 10) 00:15:54.588 4.788 - 4.812: 97.4764% ( 5) 00:15:54.588 4.812 - 4.836: 97.5374% ( 8) 00:15:54.588 4.836 - 4.859: 97.5755% ( 5) 00:15:54.588 4.859 - 4.883: 97.6746% ( 13) 00:15:54.588 4.883 - 4.907: 97.7051% ( 4) 00:15:54.588 4.907 - 4.930: 97.7661% ( 8) 00:15:54.588 4.930 - 4.954: 97.8042% ( 5) 00:15:54.588 4.954 - 4.978: 97.8805% ( 10) 00:15:54.588 4.978 - 5.001: 97.9567% ( 10) 00:15:54.588 5.001 - 5.025: 97.9872% ( 4) 00:15:54.588 5.025 - 5.049: 98.0101% ( 3) 00:15:54.588 5.049 - 5.073: 98.0406% ( 4) 00:15:54.588 5.073 - 5.096: 98.0634% ( 3) 00:15:54.588 5.096 - 5.120: 98.1168% ( 7) 00:15:54.588 5.120 - 5.144: 98.1397% ( 3) 00:15:54.588 5.144 - 5.167: 98.1702% ( 4) 00:15:54.588 5.191 - 5.215: 98.1930% ( 3) 00:15:54.588 5.215 - 5.239: 98.2159% ( 3) 00:15:54.588 5.239 - 5.262: 98.2388% ( 3) 00:15:54.588 5.262 - 5.286: 98.2540% ( 2) 00:15:54.588 5.286 - 5.310: 98.2769% ( 3) 00:15:54.588 5.310 - 5.333: 98.2922% ( 2) 00:15:54.588 5.333 - 5.357: 98.2998% ( 
1) 00:15:54.588 5.357 - 5.381: 98.3150% ( 2) 00:15:54.588 5.428 - 5.452: 98.3227% ( 1) 00:15:54.588 5.452 - 5.476: 98.3379% ( 2) 00:15:54.588 5.499 - 5.523: 98.3532% ( 2) 00:15:54.588 5.547 - 5.570: 98.3684% ( 2) 00:15:54.588 5.570 - 5.594: 98.3989% ( 4) 00:15:54.588 5.594 - 5.618: 98.4065% ( 1) 00:15:54.588 5.665 - 5.689: 98.4142% ( 1) 00:15:54.588 5.736 - 5.760: 98.4218% ( 1) 00:15:54.588 5.760 - 5.784: 98.4370% ( 2) 00:15:54.588 5.784 - 5.807: 98.4599% ( 3) 00:15:54.588 5.831 - 5.855: 98.4675% ( 1) 00:15:54.588 5.855 - 5.879: 98.4751% ( 1) 00:15:54.588 5.879 - 5.902: 98.4828% ( 1) 00:15:54.588 5.902 - 5.926: 98.4904% ( 1) 00:15:54.588 5.926 - 5.950: 98.4980% ( 1) 00:15:54.588 6.068 - 6.116: 98.5133% ( 2) 00:15:54.588 6.116 - 6.163: 98.5209% ( 1) 00:15:54.588 6.400 - 6.447: 98.5285% ( 1) 00:15:54.588 6.447 - 6.495: 98.5361% ( 1) 00:15:54.588 6.542 - 6.590: 98.5438% ( 1) 00:15:54.588 6.874 - 6.921: 98.5514% ( 1) 00:15:54.588 7.016 - 7.064: 98.5590% ( 1) 00:15:54.588 7.064 - 7.111: 98.5743% ( 2) 00:15:54.588 7.111 - 7.159: 98.5819% ( 1) 00:15:54.588 7.206 - 7.253: 98.5895% ( 1) 00:15:54.588 7.490 - 7.538: 98.5971% ( 1) 00:15:54.588 7.585 - 7.633: 98.6048% ( 1) 00:15:54.588 7.633 - 7.680: 98.6124% ( 1) 00:15:54.588 7.680 - 7.727: 98.6200% ( 1) 00:15:54.588 7.775 - 7.822: 98.6276% ( 1) 00:15:54.588 7.822 - 7.870: 98.6429% ( 2) 00:15:54.588 7.870 - 7.917: 98.6581% ( 2) 00:15:54.588 7.917 - 7.964: 98.6658% ( 1) 00:15:54.588 7.964 - 8.012: 98.6886% ( 3) 00:15:54.588 8.154 - 8.201: 98.6962% ( 1) 00:15:54.588 8.296 - 8.344: 98.7039% ( 1) 00:15:54.588 8.344 - 8.391: 98.7115% ( 1) 00:15:54.588 8.391 - 8.439: 98.7191% ( 1) 00:15:54.588 8.439 - 8.486: 98.7344% ( 2) 00:15:54.588 8.723 - 8.770: 98.7420% ( 1) 00:15:54.588 8.770 - 8.818: 98.7496% ( 1) 00:15:54.588 8.818 - 8.865: 98.7572% ( 1) 00:15:54.588 8.960 - 9.007: 98.7649% ( 1) 00:15:54.588 9.007 - 9.055: 98.7725% ( 1) 00:15:54.588 9.055 - 9.102: 98.7801% ( 1) 00:15:54.588 9.102 - 9.150: 98.7877% ( 1) 00:15:54.588 9.197 - 9.244: 98.7954% ( 1) 00:15:54.588 9.292 - 9.339: 98.8030% ( 1) 00:15:54.588 9.387 - 9.434: 98.8106% ( 1) 00:15:54.588 9.481 - 9.529: 98.8182% ( 1) 00:15:54.588 9.624 - 9.671: 98.8259% ( 1) 00:15:54.588 9.719 - 9.766: 98.8335% ( 1) 00:15:54.588 9.861 - 9.908: 98.8411% ( 1) 00:15:54.588 10.098 - 10.145: 98.8487% ( 1) 00:15:54.588 10.193 - 10.240: 98.8564% ( 1) 00:15:54.588 10.335 - 10.382: 98.8716% ( 2) 00:15:54.588 10.430 - 10.477: 98.8792% ( 1) 00:15:54.588 10.809 - 10.856: 98.8869% ( 1) 00:15:54.588 10.856 - 10.904: 98.8945% ( 1) 00:15:54.588 10.951 - 10.999: 98.9021% ( 1) 00:15:54.588 11.093 - 11.141: 98.9097% ( 1) 00:15:54.588 11.710 - 11.757: 98.9174% ( 1) 00:15:54.588 11.757 - 11.804: 98.9250% ( 1) 00:15:54.588 11.852 - 11.899: 98.9326% ( 1) 00:15:54.588 12.089 - 12.136: 98.9402% ( 1) 00:15:54.588 12.231 - 12.326: 98.9478% ( 1) 00:15:54.588 12.610 - 12.705: 98.9631% ( 2) 00:15:54.588 12.705 - 12.800: 98.9707% ( 1) 00:15:54.588 13.084 - 13.179: 98.9783% ( 1) 00:15:54.588 13.369 - 13.464: 98.9860% ( 1) 00:15:54.588 13.653 - 13.748: 98.9936% ( 1) 00:15:54.588 14.033 - 14.127: 99.0088% ( 2) 00:15:54.588 17.256 - 17.351: 99.0317% ( 3) 00:15:54.588 17.351 - 17.446: 99.0622% ( 4) 00:15:54.588 17.446 - 17.541: 99.0927% ( 4) 00:15:54.588 17.541 - 17.636: 99.1232% ( 4) 00:15:54.588 17.636 - 17.730: 99.1537% ( 4) 00:15:54.588 17.730 - 17.825: 99.1918% ( 5) 00:15:54.588 17.825 - 17.920: 99.2528% ( 8) 00:15:54.588 17.920 - 18.015: 99.2986% ( 6) 00:15:54.588 18.015 - 18.110: 99.3519% ( 7) 00:15:54.588 18.110 - 18.204: 99.4663% ( 15) 
00:15:54.588 18.204 - 18.299: 99.5197% ( 7) 00:15:54.588 18.299 - 18.394: 99.5502% ( 4) 00:15:54.588 18.394 - 18.489: 99.6264% ( 10) 00:15:54.588 18.489 - 18.584: 99.7027% ( 10) 00:15:54.588 18.584 - 18.679: 99.7255% ( 3) 00:15:54.588 18.679 - 18.773: 99.7560% ( 4) 00:15:54.588 18.773 - 18.868: 99.7941% ( 5) 00:15:54.588 18.868 - 18.963: 99.8170% ( 3) 00:15:54.588 18.963 - 19.058: 99.8246% ( 1) 00:15:54.588 19.058 - 19.153: 99.8399% ( 2) 00:15:54.588 19.247 - 19.342: 99.8475% ( 1) 00:15:54.588 19.532 - 19.627: 99.8551% ( 1) 00:15:54.588 19.911 - 20.006: 99.8628% ( 1) 00:15:54.588 22.756 - 22.850: 99.8704% ( 1) 00:15:54.588 24.462 - 24.652: 99.8780% ( 1) 00:15:54.588 24.652 - 24.841: 99.8856% ( 1) 00:15:54.588 28.065 - 28.255: 99.8933% ( 1) 00:15:54.588 3980.705 - 4004.978: 99.9695% ( 10) 00:15:54.588 4004.978 - 4029.250: 99.9924% ( 3) 00:15:54.588 6990.507 - 7039.052: 100.0000% ( 1) 00:15:54.588 00:15:54.588 Complete histogram 00:15:54.588 ================== 00:15:54.588 Range in us Cumulative Count 00:15:54.588 2.074 - 2.086: 0.8463% ( 111) 00:15:54.588 2.086 - 2.098: 31.1985% ( 3981) 00:15:54.588 2.098 - 2.110: 46.8435% ( 2052) 00:15:54.588 2.110 - 2.121: 49.8399% ( 393) 00:15:54.588 2.121 - 2.133: 59.1110% ( 1216) 00:15:54.588 2.133 - 2.145: 62.5496% ( 451) 00:15:54.588 2.145 - 2.157: 65.0961% ( 334) 00:15:54.588 2.157 - 2.169: 74.7941% ( 1272) 00:15:54.588 2.169 - 2.181: 77.5618% ( 363) 00:15:54.588 2.181 - 2.193: 78.8731% ( 172) 00:15:54.588 2.193 - 2.204: 82.5023% ( 476) 00:15:54.588 2.204 - 2.216: 83.4477% ( 124) 00:15:54.588 2.216 - 2.228: 84.2330% ( 103) 00:15:54.588 2.228 - 2.240: 87.5572% ( 436) 00:15:54.588 2.240 - 2.252: 89.7682% ( 290) 00:15:54.588 2.252 - 2.264: 91.5980% ( 240) 00:15:54.588 2.264 - 2.276: 92.9933% ( 183) 00:15:54.588 2.276 - 2.287: 93.5422% ( 72) 00:15:54.588 2.287 - 2.299: 93.7633% ( 29) 00:15:54.588 2.299 - 2.311: 93.9082% ( 19) 00:15:54.588 2.311 - 2.323: 94.3657% ( 60) 00:15:54.588 2.323 - 2.335: 94.9375% ( 75) 00:15:54.588 2.335 - 2.347: 95.1052% ( 22) 00:15:54.588 2.347 - 2.359: 95.1662% ( 8) 00:15:54.588 2.359 - 2.370: 95.2272% ( 8) 00:15:54.588 2.370 - 2.382: 95.3492% ( 16) 00:15:54.588 2.382 - 2.394: 95.5550% ( 27) 00:15:54.588 2.394 - 2.406: 95.9286% ( 49) 00:15:54.588 2.406 - 2.418: 96.3403% ( 54) 00:15:54.588 2.418 - 2.430: 96.7139% ( 49) 00:15:54.588 2.430 - 2.441: 96.8969% ( 24) 00:15:54.588 2.441 - 2.453: 97.0494% ( 20) 00:15:54.588 2.453 - 2.465: 97.2705% ( 29) 00:15:54.588 2.465 - 2.477: 97.4001% ( 17) 00:15:54.588 2.477 - 2.489: 97.4992% ( 13) 00:15:54.588 2.489 - 2.501: 97.6822% ( 24) 00:15:54.588 2.501 - 2.513: 97.7585% ( 10) 00:15:54.588 2.513 - 2.524: 97.8347% ( 10) 00:15:54.588 2.524 - 2.536: 97.8728% ( 5) 00:15:54.588 2.536 - 2.548: 97.9109% ( 5) 00:15:54.588 2.548 - 2.560: 97.9643% ( 7) 00:15:54.588 2.560 - 2.572: 98.0024% ( 5) 00:15:54.589 2.572 - 2.584: 98.0101% ( 1) 00:15:54.589 2.584 - 2.596: 98.0329% ( 3) 00:15:54.589 2.596 - 2.607: 98.0406% ( 1) 00:15:54.589 2.607 - 2.619: 98.0482% ( 1) 00:15:54.589 2.619 - 2.631: 98.0558% ( 1) 00:15:54.589 2.643 - 2.655: 98.0711% ( 2) 00:15:54.589 2.667 - 2.679: 98.0787% ( 1) 00:15:54.589 2.679 - 2.690: 98.0939% ( 2) 00:15:54.589 2.690 - 2.702: 98.1092% ( 2) 00:15:54.589 2.714 - 2.726: 98.1244% ( 2) 00:15:54.589 2.738 - 2.750: 98.1321% ( 1) 00:15:54.589 2.750 - 2.761: 98.1397% ( 1) 00:15:54.589 2.773 - 2.785: 98.1473% ( 1) 00:15:54.589 2.809 - 2.821: 98.1549% ( 1) 00:15:54.589 2.844 - 2.856: 98.1702% ( 2) 00:15:54.589 2.880 - 2.892: 98.1778% ( 1) 00:15:54.589 2.892 - 2.904: 98.1854% ( 1) 
00:15:54.589 2.939 - 2.951: 98.2007% ( 2) 00:15:54.589 2.975 - 2.987: 98.2083% ( 1) 00:15:54.589 2.987 - 2.999: 98.2312% ( 3) 00:15:54.589 3.022 - 3.034: 98.2388% ( 1) 00:15:54.589 3.034 - 3.058: 98.2464% ( 1) 00:15:54.589 3.058 - 3.081: 98.2617% ( 2) 00:15:54.589 3.105 - 3.129: 98.2769% ( 2) 00:15:54.589 3.176 - 3.200: 98.2845% ( 1) 00:15:54.589 3.224 - 3.247: 98.2998% ( 2) 00:15:54.589 3.271 - 3.295: 98.3150% ( 2) 00:15:54.589 3.295 - 3.319: 98.3227% ( 1) 00:15:54.589 3.413 - 3.437: 98.3303% ( 1) 00:15:54.589 3.437 - 3.461: 9[2024-07-14 01:01:43.937660] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:54.589 8.3379% ( 1) 00:15:54.589 3.461 - 3.484: 98.3455% ( 1) 00:15:54.589 3.484 - 3.508: 98.3684% ( 3) 00:15:54.589 3.508 - 3.532: 98.3837% ( 2) 00:15:54.589 3.532 - 3.556: 98.3913% ( 1) 00:15:54.589 3.556 - 3.579: 98.3989% ( 1) 00:15:54.589 3.579 - 3.603: 98.4065% ( 1) 00:15:54.589 3.603 - 3.627: 98.4370% ( 4) 00:15:54.589 3.627 - 3.650: 98.4675% ( 4) 00:15:54.589 3.650 - 3.674: 98.4828% ( 2) 00:15:54.589 3.674 - 3.698: 98.4980% ( 2) 00:15:54.589 3.698 - 3.721: 98.5361% ( 5) 00:15:54.589 3.721 - 3.745: 98.5438% ( 1) 00:15:54.589 3.745 - 3.769: 98.5666% ( 3) 00:15:54.589 3.769 - 3.793: 98.5895% ( 3) 00:15:54.589 3.793 - 3.816: 98.6048% ( 2) 00:15:54.589 3.816 - 3.840: 98.6124% ( 1) 00:15:54.589 3.840 - 3.864: 98.6200% ( 1) 00:15:54.589 3.864 - 3.887: 98.6429% ( 3) 00:15:54.589 3.935 - 3.959: 98.6505% ( 1) 00:15:54.589 4.006 - 4.030: 98.6581% ( 1) 00:15:54.589 4.053 - 4.077: 98.6734% ( 2) 00:15:54.589 4.930 - 4.954: 98.6810% ( 1) 00:15:54.589 5.594 - 5.618: 98.6886% ( 1) 00:15:54.589 5.760 - 5.784: 98.6962% ( 1) 00:15:54.589 5.973 - 5.997: 98.7039% ( 1) 00:15:54.589 6.116 - 6.163: 98.7115% ( 1) 00:15:54.589 6.258 - 6.305: 98.7191% ( 1) 00:15:54.589 6.542 - 6.590: 98.7344% ( 2) 00:15:54.589 6.637 - 6.684: 98.7420% ( 1) 00:15:54.589 6.732 - 6.779: 98.7496% ( 1) 00:15:54.589 6.874 - 6.921: 98.7572% ( 1) 00:15:54.589 6.969 - 7.016: 98.7649% ( 1) 00:15:54.589 7.633 - 7.680: 98.7725% ( 1) 00:15:54.589 7.822 - 7.870: 98.7801% ( 1) 00:15:54.589 7.917 - 7.964: 98.7877% ( 1) 00:15:54.589 8.059 - 8.107: 98.7954% ( 1) 00:15:54.589 8.723 - 8.770: 98.8030% ( 1) 00:15:54.589 9.055 - 9.102: 98.8106% ( 1) 00:15:54.589 11.804 - 11.852: 98.8182% ( 1) 00:15:54.589 12.990 - 13.084: 98.8259% ( 1) 00:15:54.589 15.455 - 15.550: 98.8335% ( 1) 00:15:54.589 15.644 - 15.739: 98.8487% ( 2) 00:15:54.589 15.739 - 15.834: 98.8716% ( 3) 00:15:54.589 15.834 - 15.929: 98.8945% ( 3) 00:15:54.589 15.929 - 16.024: 98.9250% ( 4) 00:15:54.589 16.024 - 16.119: 98.9402% ( 2) 00:15:54.589 16.119 - 16.213: 98.9707% ( 4) 00:15:54.589 16.213 - 16.308: 99.0165% ( 6) 00:15:54.589 16.308 - 16.403: 99.0470% ( 4) 00:15:54.589 16.403 - 16.498: 99.0927% ( 6) 00:15:54.589 16.498 - 16.593: 99.1156% ( 3) 00:15:54.589 16.593 - 16.687: 99.1766% ( 8) 00:15:54.589 16.687 - 16.782: 99.2147% ( 5) 00:15:54.589 16.782 - 16.877: 99.2452% ( 4) 00:15:54.589 16.877 - 16.972: 99.2528% ( 1) 00:15:54.589 16.972 - 17.067: 99.2757% ( 3) 00:15:54.589 17.161 - 17.256: 99.2833% ( 1) 00:15:54.589 17.256 - 17.351: 99.3062% ( 3) 00:15:54.589 17.351 - 17.446: 99.3138% ( 1) 00:15:54.589 17.446 - 17.541: 99.3214% ( 1) 00:15:54.589 17.730 - 17.825: 99.3291% ( 1) 00:15:54.589 17.825 - 17.920: 99.3367% ( 1) 00:15:54.589 18.015 - 18.110: 99.3443% ( 1) 00:15:54.589 18.110 - 18.204: 99.3519% ( 1) 00:15:54.589 18.489 - 18.584: 99.3596% ( 1) 00:15:54.589 18.584 - 18.679: 99.3672% ( 1) 00:15:54.589 18.679 - 18.773: 
99.3824% ( 2) 00:15:54.589 18.963 - 19.058: 99.3901% ( 1) 00:15:54.589 25.031 - 25.221: 99.3977% ( 1) 00:15:54.589 89.126 - 89.505: 99.4053% ( 1) 00:15:54.589 2002.489 - 2014.625: 99.4129% ( 1) 00:15:54.589 3980.705 - 4004.978: 99.8856% ( 62) 00:15:54.589 4004.978 - 4029.250: 100.0000% ( 15) 00:15:54.589 00:15:54.589 01:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:54.589 01:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:54.589 01:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:54.589 01:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:54.589 01:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:54.847 [ 00:15:54.847 { 00:15:54.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:54.847 "subtype": "Discovery", 00:15:54.847 "listen_addresses": [], 00:15:54.847 "allow_any_host": true, 00:15:54.847 "hosts": [] 00:15:54.847 }, 00:15:54.847 { 00:15:54.847 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:54.847 "subtype": "NVMe", 00:15:54.847 "listen_addresses": [ 00:15:54.847 { 00:15:54.847 "trtype": "VFIOUSER", 00:15:54.847 "adrfam": "IPv4", 00:15:54.847 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:54.847 "trsvcid": "0" 00:15:54.847 } 00:15:54.847 ], 00:15:54.847 "allow_any_host": true, 00:15:54.847 "hosts": [], 00:15:54.847 "serial_number": "SPDK1", 00:15:54.847 "model_number": "SPDK bdev Controller", 00:15:54.847 "max_namespaces": 32, 00:15:54.847 "min_cntlid": 1, 00:15:54.847 "max_cntlid": 65519, 00:15:54.847 "namespaces": [ 00:15:54.847 { 00:15:54.847 "nsid": 1, 00:15:54.847 "bdev_name": "Malloc1", 00:15:54.847 "name": "Malloc1", 00:15:54.847 "nguid": "DFB3C858D8644D4B8F1BC0EE3549A923", 00:15:54.847 "uuid": "dfb3c858-d864-4d4b-8f1b-c0ee3549a923" 00:15:54.847 }, 00:15:54.847 { 00:15:54.847 "nsid": 2, 00:15:54.847 "bdev_name": "Malloc3", 00:15:54.847 "name": "Malloc3", 00:15:54.847 "nguid": "F6E958F720D6490EA95C9ABE70054157", 00:15:54.847 "uuid": "f6e958f7-20d6-490e-a95c-9abe70054157" 00:15:54.847 } 00:15:54.847 ] 00:15:54.847 }, 00:15:54.847 { 00:15:54.847 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:54.847 "subtype": "NVMe", 00:15:54.847 "listen_addresses": [ 00:15:54.847 { 00:15:54.847 "trtype": "VFIOUSER", 00:15:54.847 "adrfam": "IPv4", 00:15:54.847 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:54.847 "trsvcid": "0" 00:15:54.847 } 00:15:54.847 ], 00:15:54.847 "allow_any_host": true, 00:15:54.847 "hosts": [], 00:15:54.847 "serial_number": "SPDK2", 00:15:54.847 "model_number": "SPDK bdev Controller", 00:15:54.847 "max_namespaces": 32, 00:15:54.847 "min_cntlid": 1, 00:15:54.847 "max_cntlid": 65519, 00:15:54.847 "namespaces": [ 00:15:54.847 { 00:15:54.847 "nsid": 1, 00:15:54.847 "bdev_name": "Malloc2", 00:15:54.847 "name": "Malloc2", 00:15:54.847 "nguid": "AED1A78E75FD45D387556B291D156C56", 00:15:54.847 "uuid": "aed1a78e-75fd-45d3-8755-6b291d156c56" 00:15:54.847 } 00:15:54.847 ] 00:15:54.847 } 00:15:54.847 ] 00:15:54.847 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:54.847 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1109414 00:15:54.847 01:01:44 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:54.848 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:54.848 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:54.848 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:54.848 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:54.848 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:54.848 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:54.848 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:55.105 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.105 [2024-07-14 01:01:44.385391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.105 Malloc4 00:15:55.105 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:55.364 [2024-07-14 01:01:44.738011] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:55.364 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:55.623 Asynchronous Event Request test 00:15:55.623 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.623 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:55.623 Registering asynchronous event callbacks... 00:15:55.623 Starting namespace attribute notice tests for all controllers... 00:15:55.623 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:55.623 aer_cb - Changed Namespace 00:15:55.623 Cleaning up... 
00:15:55.623 [ 00:15:55.623 { 00:15:55.623 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:55.623 "subtype": "Discovery", 00:15:55.623 "listen_addresses": [], 00:15:55.623 "allow_any_host": true, 00:15:55.623 "hosts": [] 00:15:55.623 }, 00:15:55.623 { 00:15:55.623 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:55.623 "subtype": "NVMe", 00:15:55.623 "listen_addresses": [ 00:15:55.623 { 00:15:55.623 "trtype": "VFIOUSER", 00:15:55.623 "adrfam": "IPv4", 00:15:55.623 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:55.623 "trsvcid": "0" 00:15:55.623 } 00:15:55.623 ], 00:15:55.623 "allow_any_host": true, 00:15:55.623 "hosts": [], 00:15:55.623 "serial_number": "SPDK1", 00:15:55.623 "model_number": "SPDK bdev Controller", 00:15:55.623 "max_namespaces": 32, 00:15:55.623 "min_cntlid": 1, 00:15:55.623 "max_cntlid": 65519, 00:15:55.623 "namespaces": [ 00:15:55.623 { 00:15:55.623 "nsid": 1, 00:15:55.623 "bdev_name": "Malloc1", 00:15:55.623 "name": "Malloc1", 00:15:55.623 "nguid": "DFB3C858D8644D4B8F1BC0EE3549A923", 00:15:55.623 "uuid": "dfb3c858-d864-4d4b-8f1b-c0ee3549a923" 00:15:55.623 }, 00:15:55.623 { 00:15:55.623 "nsid": 2, 00:15:55.623 "bdev_name": "Malloc3", 00:15:55.623 "name": "Malloc3", 00:15:55.623 "nguid": "F6E958F720D6490EA95C9ABE70054157", 00:15:55.623 "uuid": "f6e958f7-20d6-490e-a95c-9abe70054157" 00:15:55.623 } 00:15:55.623 ] 00:15:55.623 }, 00:15:55.623 { 00:15:55.623 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:55.623 "subtype": "NVMe", 00:15:55.623 "listen_addresses": [ 00:15:55.623 { 00:15:55.623 "trtype": "VFIOUSER", 00:15:55.623 "adrfam": "IPv4", 00:15:55.623 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:55.623 "trsvcid": "0" 00:15:55.623 } 00:15:55.623 ], 00:15:55.623 "allow_any_host": true, 00:15:55.623 "hosts": [], 00:15:55.623 "serial_number": "SPDK2", 00:15:55.623 "model_number": "SPDK bdev Controller", 00:15:55.623 "max_namespaces": 32, 00:15:55.623 "min_cntlid": 1, 00:15:55.623 "max_cntlid": 65519, 00:15:55.623 "namespaces": [ 00:15:55.623 { 00:15:55.623 "nsid": 1, 00:15:55.623 "bdev_name": "Malloc2", 00:15:55.623 "name": "Malloc2", 00:15:55.623 "nguid": "AED1A78E75FD45D387556B291D156C56", 00:15:55.623 "uuid": "aed1a78e-75fd-45d3-8755-6b291d156c56" 00:15:55.623 }, 00:15:55.623 { 00:15:55.623 "nsid": 2, 00:15:55.623 "bdev_name": "Malloc4", 00:15:55.623 "name": "Malloc4", 00:15:55.623 "nguid": "3D3F78E6849F488AA94932601D6D78FA", 00:15:55.623 "uuid": "3d3f78e6-849f-488a-a949-32601d6d78fa" 00:15:55.623 } 00:15:55.623 ] 00:15:55.623 } 00:15:55.623 ] 00:15:55.623 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1109414 00:15:55.623 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:55.623 01:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1103947 00:15:55.623 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1103947 ']' 00:15:55.623 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1103947 00:15:55.623 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:55.623 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.623 01:01:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1103947 00:15:55.624 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.624 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:15:55.624 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1103947' 00:15:55.624 killing process with pid 1103947 00:15:55.624 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1103947 00:15:55.624 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1103947 00:15:56.193 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1109558 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1109558' 00:15:56.194 Process pid: 1109558 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1109558 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1109558 ']' 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.194 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:56.194 [2024-07-14 01:01:45.402650] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:56.194 [2024-07-14 01:01:45.403650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:56.194 [2024-07-14 01:01:45.403712] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.194 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.194 [2024-07-14 01:01:45.460915] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.194 [2024-07-14 01:01:45.547195] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.194 [2024-07-14 01:01:45.547245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:56.194 [2024-07-14 01:01:45.547274] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.194 [2024-07-14 01:01:45.547286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.194 [2024-07-14 01:01:45.547297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.194 [2024-07-14 01:01:45.547397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.194 [2024-07-14 01:01:45.547467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.194 [2024-07-14 01:01:45.547526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.194 [2024-07-14 01:01:45.547528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.453 [2024-07-14 01:01:45.642363] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:56.453 [2024-07-14 01:01:45.642614] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:56.453 [2024-07-14 01:01:45.642915] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:56.453 [2024-07-14 01:01:45.643533] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:56.453 [2024-07-14 01:01:45.643785] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:56.453 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.453 01:01:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:56.453 01:01:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:57.391 01:01:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:57.649 01:01:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:57.649 01:01:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:57.649 01:01:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:57.649 01:01:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:57.649 01:01:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:57.907 Malloc1 00:15:57.907 01:01:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:58.168 01:01:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:58.450 01:01:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:58.711 01:01:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:15:58.711 01:01:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:58.711 01:01:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:58.969 Malloc2 00:15:58.969 01:01:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:59.227 01:01:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:59.485 01:01:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:59.744 01:01:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:59.744 01:01:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1109558 00:15:59.744 01:01:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1109558 ']' 00:15:59.744 01:01:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1109558 00:15:59.744 01:01:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:59.744 01:01:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.744 01:01:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1109558 00:15:59.744 01:01:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:59.744 01:01:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:59.744 01:01:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1109558' 00:15:59.744 killing process with pid 1109558 00:15:59.744 01:01:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1109558 00:15:59.744 01:01:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1109558 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:00.003 00:16:00.003 real 0m52.271s 00:16:00.003 user 3m26.458s 00:16:00.003 sys 0m4.378s 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:00.003 ************************************ 00:16:00.003 END TEST nvmf_vfio_user 00:16:00.003 ************************************ 00:16:00.003 01:01:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:00.003 01:01:49 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:00.003 01:01:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:00.003 01:01:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.003 01:01:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.003 ************************************ 00:16:00.003 START 
TEST nvmf_vfio_user_nvme_compliance 00:16:00.003 ************************************ 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:00.003 * Looking for test storage... 00:16:00.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.003 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1110148 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1110148' 00:16:00.263 Process pid: 1110148 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1110148 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1110148 ']' 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.263 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:00.263 [2024-07-14 01:01:49.474087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:00.263 [2024-07-14 01:01:49.474162] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.263 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.263 [2024-07-14 01:01:49.530942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:00.263 [2024-07-14 01:01:49.615658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.263 [2024-07-14 01:01:49.615712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.263 [2024-07-14 01:01:49.615741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.263 [2024-07-14 01:01:49.615753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.263 [2024-07-14 01:01:49.615762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
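Condensing the xtrace above and below: once nvmf_tgt (pid 1110148) is answering RPCs, compliance.sh provisions the vfio-user endpoint with a short RPC sequence. Rewritten as plain scripts/rpc.py calls — treating rpc_cmd as the autotest wrapper around that script is an assumption on our part; every argument is copied from the trace — the setup is roughly:

    # minimal sketch, assuming nvmf_tgt already answers RPCs on the default /var/tmp/spdk.sock
    scripts/rpc.py nvmf_create_transport -t VFIOUSER        # enable the vfio-user transport
    mkdir -p /var/run/vfio-user                             # directory that will hold the vfio-user socket
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0     # 64 MB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32   # -a: allow any host, -s: serial, -m: max namespaces
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance binary then connects to that listener with a trtype:VFIOUSER transport ID and runs the 18 compliance cases summarised at the end of this stage.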
00:16:00.263 [2024-07-14 01:01:49.615832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.263 [2024-07-14 01:01:49.615924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.263 [2024-07-14 01:01:49.615927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.522 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:00.522 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:16:00.522 01:01:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.457 malloc0 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:01.457 01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.457 
01:01:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:01.457 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.717 00:16:01.717 00:16:01.717 CUnit - A unit testing framework for C - Version 2.1-3 00:16:01.717 http://cunit.sourceforge.net/ 00:16:01.717 00:16:01.717 00:16:01.717 Suite: nvme_compliance 00:16:01.717 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 01:01:50.973554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:01.717 [2024-07-14 01:01:50.975081] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:01.717 [2024-07-14 01:01:50.975107] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:01.717 [2024-07-14 01:01:50.975120] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:01.717 [2024-07-14 01:01:50.976585] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.717 passed 00:16:01.717 Test: admin_identify_ctrlr_verify_fused ...[2024-07-14 01:01:51.058199] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:01.717 [2024-07-14 01:01:51.061219] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.717 passed 00:16:01.977 Test: admin_identify_ns ...[2024-07-14 01:01:51.148645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:01.977 [2024-07-14 01:01:51.210881] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:01.977 [2024-07-14 01:01:51.218880] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:01.977 [2024-07-14 01:01:51.240028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.977 passed 00:16:01.977 Test: admin_get_features_mandatory_features ...[2024-07-14 01:01:51.324379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:01.977 [2024-07-14 01:01:51.327399] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:01.977 passed 00:16:02.237 Test: admin_get_features_optional_features ...[2024-07-14 01:01:51.413959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.237 [2024-07-14 01:01:51.416982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.237 passed 00:16:02.237 Test: admin_set_features_number_of_queues ...[2024-07-14 01:01:51.502390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.237 [2024-07-14 01:01:51.607121] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.237 passed 00:16:02.497 Test: admin_get_log_page_mandatory_logs ...[2024-07-14 01:01:51.690188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.497 [2024-07-14 01:01:51.693224] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.497 passed 00:16:02.497 Test: admin_get_log_page_with_lpo ...[2024-07-14 01:01:51.779662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.497 [2024-07-14 01:01:51.846892] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:02.497 [2024-07-14 01:01:51.859962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.497 passed 00:16:02.756 Test: fabric_property_get ...[2024-07-14 01:01:51.943570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.756 [2024-07-14 01:01:51.944857] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:02.756 [2024-07-14 01:01:51.946595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.756 passed 00:16:02.756 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 01:01:52.029121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:02.756 [2024-07-14 01:01:52.030408] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:02.756 [2024-07-14 01:01:52.032165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:02.756 passed 00:16:02.756 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 01:01:52.115307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.015 [2024-07-14 01:01:52.197891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:03.015 [2024-07-14 01:01:52.213876] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:03.015 [2024-07-14 01:01:52.219036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.015 passed 00:16:03.015 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 01:01:52.302611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.015 [2024-07-14 01:01:52.303937] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:03.015 [2024-07-14 01:01:52.305633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.015 passed 00:16:03.015 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 01:01:52.387231] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.275 [2024-07-14 01:01:52.462890] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:03.275 [2024-07-14 01:01:52.486895] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:03.275 [2024-07-14 01:01:52.491989] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.275 passed 00:16:03.275 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 01:01:52.578166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.275 [2024-07-14 01:01:52.579464] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:03.275 [2024-07-14 01:01:52.579520] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:03.275 [2024-07-14 01:01:52.581189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.275 passed 00:16:03.275 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 01:01:52.663399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.535 [2024-07-14 01:01:52.754891] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:03.535 [2024-07-14 01:01:52.762888] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:03.535 [2024-07-14 01:01:52.770887] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:03.535 [2024-07-14 01:01:52.778880] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:03.535 [2024-07-14 01:01:52.807997] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.535 passed 00:16:03.535 Test: admin_create_io_sq_verify_pc ...[2024-07-14 01:01:52.892419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:03.535 [2024-07-14 01:01:52.907900] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:03.535 [2024-07-14 01:01:52.925753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:03.794 passed 00:16:03.794 Test: admin_create_io_qp_max_qps ...[2024-07-14 01:01:53.010332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:04.730 [2024-07-14 01:01:54.102897] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:05.297 [2024-07-14 01:01:54.497423] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.297 passed 00:16:05.297 Test: admin_create_io_sq_shared_cq ...[2024-07-14 01:01:54.578316] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:05.297 [2024-07-14 01:01:54.710883] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:05.557 [2024-07-14 01:01:54.746957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:05.557 passed 00:16:05.557 00:16:05.557 Run Summary: Type Total Ran Passed Failed Inactive 00:16:05.557 suites 1 1 n/a 0 0 00:16:05.557 tests 18 18 18 0 0 00:16:05.557 asserts 360 360 360 0 n/a 00:16:05.557 00:16:05.557 Elapsed time = 1.566 seconds 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1110148 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1110148 ']' 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1110148 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1110148 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1110148' 00:16:05.557 killing process with pid 1110148 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1110148 00:16:05.557 01:01:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1110148 00:16:05.816 01:01:55 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:05.816 01:01:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:05.816 00:16:05.816 real 0m5.736s 00:16:05.816 user 0m16.163s 00:16:05.816 sys 0m0.567s 00:16:05.816 01:01:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:05.816 01:01:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:05.816 ************************************ 00:16:05.816 END TEST nvmf_vfio_user_nvme_compliance 00:16:05.816 ************************************ 00:16:05.816 01:01:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:05.816 01:01:55 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:05.816 01:01:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:05.816 01:01:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.816 01:01:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:05.816 ************************************ 00:16:05.816 START TEST nvmf_vfio_user_fuzz 00:16:05.816 ************************************ 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:05.817 * Looking for test storage... 00:16:05.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
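Both this stage and the fuzz stage that follows wrap their nvmf_tgt instance in the same lifetime guard, so an interrupted run still tears the target down before the next test starts. Schematically (simplified; killprocess and waitforlisten are helpers from autotest_common.sh and are only referenced by name here):

    # schematic of the guard pattern visible in the trace; details simplified
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT   # clean up even on abnormal exit
    waitforlisten $nvmfpid            # block until the RPC socket /var/tmp/spdk.sock is up
    # ... run the actual test against the target ...
    killprocess $nvmfpid              # normal-path teardown
    rm -rf /var/run/vfio-user
    trap - SIGINT SIGTERM EXIT        # drop the guard once cleanup has run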
00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.817 01:01:55 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1110872 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1110872' 00:16:05.817 Process pid: 1110872 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1110872 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1110872 ']' 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
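The fuzz stage provisions its target the same way as the compliance stage above (a malloc0 namespace under nqn.2021-09.io.spdk:cnode0 behind the /var/run/vfio-user listener) and then points the nvme_fuzz app at that transport ID. A stand-alone equivalent of the command recorded below, run from an SPDK build tree, would be the following; the flag readings in the comment are our interpretation, not something stated in the log:

    # illustrative re-run of the recorded fuzz command
    # -m 0x2: core mask, -t 30: run time in seconds, -S 123456: fixed random seed (assumed meanings)
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

With that fixed seed the run gets through roughly 600k I/O commands and 76k admin commands before shutting down, as the opcode dump at the end of the stage shows.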
00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.817 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:06.385 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.385 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:06.385 01:01:55 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.323 malloc0 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:07.323 01:01:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:39.420 Fuzzing completed. 
Shutting down the fuzz application 00:16:39.420 00:16:39.420 Dumping successful admin opcodes: 00:16:39.420 8, 9, 10, 24, 00:16:39.420 Dumping successful io opcodes: 00:16:39.420 0, 00:16:39.420 NS: 0x200003a1ef00 I/O qp, Total commands completed: 599873, total successful commands: 2320, random_seed: 1334130816 00:16:39.420 NS: 0x200003a1ef00 admin qp, Total commands completed: 76220, total successful commands: 593, random_seed: 2167922816 00:16:39.420 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:39.420 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.420 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:39.421 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.421 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1110872 00:16:39.421 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1110872 ']' 00:16:39.421 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1110872 00:16:39.421 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:39.421 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.421 01:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1110872 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1110872' 00:16:39.421 killing process with pid 1110872 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1110872 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1110872 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:39.421 00:16:39.421 real 0m32.163s 00:16:39.421 user 0m31.656s 00:16:39.421 sys 0m29.911s 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:39.421 01:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:39.421 ************************************ 00:16:39.421 END TEST nvmf_vfio_user_fuzz 00:16:39.421 ************************************ 00:16:39.421 01:02:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:39.421 01:02:27 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:39.421 01:02:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:39.421 01:02:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.421 01:02:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:39.421 ************************************ 00:16:39.421 
START TEST nvmf_host_management 00:16:39.421 ************************************ 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:39.421 * Looking for test storage... 00:16:39.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.421 01:02:27 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:39.421 01:02:27 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:39.421 01:02:27 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.360 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.360 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.360 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.360 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.360 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:16:40.360 00:16:40.360 --- 10.0.0.2 ping statistics --- 00:16:40.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.360 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:16:40.361 00:16:40.361 --- 10.0.0.1 ping statistics --- 00:16:40.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.361 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1116309 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1116309 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1116309 ']' 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:40.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.361 01:02:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.361 [2024-07-14 01:02:29.739412] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:40.361 [2024-07-14 01:02:29.739486] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.620 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.620 [2024-07-14 01:02:29.807976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.620 [2024-07-14 01:02:29.895360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.620 [2024-07-14 01:02:29.895429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.620 [2024-07-14 01:02:29.895456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.620 [2024-07-14 01:02:29.895467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.620 [2024-07-14 01:02:29.895477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.620 [2024-07-14 01:02:29.895568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.620 [2024-07-14 01:02:29.895632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.620 [2024-07-14 01:02:29.895698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.620 [2024-07-14 01:02:29.895700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.620 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.879 [2024-07-14 01:02:30.036561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.879 01:02:30 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.879 Malloc0 00:16:40.879 [2024-07-14 01:02:30.096198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1116365 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1116365 /var/tmp/bdevperf.sock 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1116365 ']' 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:40.879 { 00:16:40.879 "params": { 00:16:40.879 "name": "Nvme$subsystem", 00:16:40.879 "trtype": "$TEST_TRANSPORT", 00:16:40.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:40.879 "adrfam": "ipv4", 00:16:40.879 "trsvcid": "$NVMF_PORT", 00:16:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:40.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:40.879 "hdgst": ${hdgst:-false}, 00:16:40.879 "ddgst": ${ddgst:-false} 00:16:40.879 }, 00:16:40.879 "method": "bdev_nvme_attach_controller" 00:16:40.879 } 00:16:40.879 EOF 00:16:40.879 )") 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:40.879 01:02:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:40.879 "params": { 00:16:40.879 "name": "Nvme0", 00:16:40.879 "trtype": "tcp", 00:16:40.879 "traddr": "10.0.0.2", 00:16:40.879 "adrfam": "ipv4", 00:16:40.879 "trsvcid": "4420", 00:16:40.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:40.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:40.879 "hdgst": false, 00:16:40.879 "ddgst": false 00:16:40.879 }, 00:16:40.879 "method": "bdev_nvme_attach_controller" 00:16:40.879 }' 00:16:40.879 [2024-07-14 01:02:30.168207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:40.879 [2024-07-14 01:02:30.168313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116365 ] 00:16:40.879 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.879 [2024-07-14 01:02:30.231449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.139 [2024-07-14 01:02:30.318565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.398 Running I/O for 10 seconds... 
00:16:41.398 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.398 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:41.398 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:41.398 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.398 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.398 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.398 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:41.398 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:16:41.399 01:02:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=449 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 449 -ge 100 ']' 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.660 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.660 [2024-07-14 01:02:31.047301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.660 [2024-07-14 01:02:31.047587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) 
to be set 00:16:41.660 [2024-07-14 01:02:31.048155]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.661 [2024-07-14 01:02:31.048175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1136e20 is same with the state(5) to be set 00:16:41.661 [2024-07-14 01:02:31.048335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.048980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.048996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.661 [2024-07-14 01:02:31.049548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.661 [2024-07-14 01:02:31.049564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.049980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.049997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:41.662 [2024-07-14 01:02:31.050043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 
01:02:31.050382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.662 [2024-07-14 01:02:31.050502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abf420 is same with the state(5) to be set 00:16:41.662 [2024-07-14 01:02:31.050598] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1abf420 was disconnected and freed. reset controller. 
00:16:41.662 [2024-07-14 01:02:31.050682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.662 [2024-07-14 01:02:31.050705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.662 [2024-07-14 01:02:31.050737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.662 [2024-07-14 01:02:31.050766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.662 [2024-07-14 01:02:31.050794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.662 [2024-07-14 01:02:31.050810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5000 is same with the state(5) to be set 00:16:41.662 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.662 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:41.662 [2024-07-14 01:02:31.051987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:41.662 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.662 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.662 task offset: 57344 on job bdev=Nvme0n1 fails 00:16:41.662 00:16:41.662 Latency(us) 00:16:41.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.662 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:41.662 Job: Nvme0n1 ended in about 0.39 seconds with error 00:16:41.662 Verification LBA range: start 0x0 length 0x400 00:16:41.662 Nvme0n1 : 0.39 1148.00 71.75 164.00 0.00 47440.73 10534.31 42525.58 00:16:41.662 =================================================================================================================== 00:16:41.662 Total : 1148.00 71.75 164.00 0.00 47440.73 10534.31 42525.58 00:16:41.662 [2024-07-14 01:02:31.054084] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:41.662 [2024-07-14 01:02:31.054112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac5000 (9): Bad file descriptor 00:16:41.662 01:02:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.662 01:02:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:41.662 [2024-07-14 01:02:31.062878] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1116365 00:16:43.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1116365) - No such process 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:43.037 { 00:16:43.037 "params": { 00:16:43.037 "name": "Nvme$subsystem", 00:16:43.037 "trtype": "$TEST_TRANSPORT", 00:16:43.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:43.037 "adrfam": "ipv4", 00:16:43.037 "trsvcid": "$NVMF_PORT", 00:16:43.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:43.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:43.037 "hdgst": ${hdgst:-false}, 00:16:43.037 "ddgst": ${ddgst:-false} 00:16:43.037 }, 00:16:43.037 "method": "bdev_nvme_attach_controller" 00:16:43.037 } 00:16:43.037 EOF 00:16:43.037 )") 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:43.037 01:02:32 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:43.037 "params": { 00:16:43.037 "name": "Nvme0", 00:16:43.037 "trtype": "tcp", 00:16:43.037 "traddr": "10.0.0.2", 00:16:43.037 "adrfam": "ipv4", 00:16:43.037 "trsvcid": "4420", 00:16:43.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:43.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:43.037 "hdgst": false, 00:16:43.037 "ddgst": false 00:16:43.037 }, 00:16:43.037 "method": "bdev_nvme_attach_controller" 00:16:43.037 }' 00:16:43.037 [2024-07-14 01:02:32.107211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:43.037 [2024-07-14 01:02:32.107318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116638 ] 00:16:43.037 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.037 [2024-07-14 01:02:32.168715] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.037 [2024-07-14 01:02:32.257023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.295 Running I/O for 1 seconds... 
00:16:44.230 00:16:44.230 Latency(us) 00:16:44.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.230 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:44.230 Verification LBA range: start 0x0 length 0x400 00:16:44.230 Nvme0n1 : 1.00 1146.41 71.65 0.00 0.00 55040.88 13107.20 46020.84 00:16:44.230 =================================================================================================================== 00:16:44.230 Total : 1146.41 71.65 0.00 0.00 55040.88 13107.20 46020.84 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.488 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.489 rmmod nvme_tcp 00:16:44.489 rmmod nvme_fabrics 00:16:44.489 rmmod nvme_keyring 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1116309 ']' 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1116309 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1116309 ']' 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1116309 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.489 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1116309 00:16:44.747 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:44.747 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:44.747 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1116309' 00:16:44.747 killing process with pid 1116309 00:16:44.747 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1116309 00:16:44.747 01:02:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1116309 00:16:44.747 [2024-07-14 01:02:34.154548] 
app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:45.007 01:02:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.007 01:02:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.007 01:02:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.007 01:02:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.007 01:02:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.007 01:02:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.007 01:02:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.007 01:02:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.910 01:02:36 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:46.910 01:02:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:46.910 00:16:46.910 real 0m8.872s 00:16:46.910 user 0m20.243s 00:16:46.910 sys 0m2.678s 00:16:46.910 01:02:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.910 01:02:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:46.910 ************************************ 00:16:46.910 END TEST nvmf_host_management 00:16:46.910 ************************************ 00:16:46.910 01:02:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:46.910 01:02:36 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:46.910 01:02:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:46.910 01:02:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.910 01:02:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:46.910 ************************************ 00:16:46.910 START TEST nvmf_lvol 00:16:46.910 ************************************ 00:16:46.910 01:02:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:47.168 * Looking for test storage... 
00:16:47.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.168 01:02:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.169 01:02:36 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:47.169 01:02:36 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.071 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:49.071 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:49.071 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:49.071 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:49.071 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:49.071 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:49.071 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:49.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:49.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:49.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:49.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:49.072 
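For readers following the trace: the device scan above amounts to matching a whitelist of Intel E810/X722 and Mellanox PCI IDs against the host bus and then reading each match's kernel net device name out of sysfs. A minimal stand-alone sketch of that lookup follows; the PCI IDs and the sysfs path are taken from the trace, while the lspci-based scan and the loop structure are illustrative stand-ins for the pci_bus_cache machinery in nvmf/common.sh, not a copy of it.

  # Sketch: locate net devices for supported NVMe-oF-capable NICs (E810 = 8086:159b on this host)
  supported_ids="8086:1592 8086:159b 8086:37d2 15b3:1017 15b3:1019"
  for id in $supported_ids; do
    for pci in $(lspci -D -d "$id" 2>/dev/null | awk '{print $1}'); do
      echo "Found $pci ($id)"
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        # an unmatched glob stays literal, so guard before reporting
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
    done
  done

On this host the two matches (0000:0a:00.0 and 0000:0a:00.1, both 8086:159b ports handled by the ice driver) resolve to cvl_0_0 and cvl_0_1, which the TCP init step that follows splits between the default namespace and cvl_0_0_ns_spdk.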
01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:49.072 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:49.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:16:49.073 00:16:49.073 --- 10.0.0.2 ping statistics --- 00:16:49.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.073 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:49.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:16:49.073 00:16:49.073 --- 10.0.0.1 ping statistics --- 00:16:49.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.073 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:49.073 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1118830 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1118830 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1118830 ']' 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:49.331 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.331 [2024-07-14 01:02:38.540149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:49.331 [2024-07-14 01:02:38.540255] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.331 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.331 [2024-07-14 01:02:38.609575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.331 [2024-07-14 01:02:38.699346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.331 [2024-07-14 01:02:38.699410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:49.331 [2024-07-14 01:02:38.699436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.331 [2024-07-14 01:02:38.699450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.331 [2024-07-14 01:02:38.699462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.331 [2024-07-14 01:02:38.699555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.331 [2024-07-14 01:02:38.699607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.331 [2024-07-14 01:02:38.699611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.589 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.589 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:49.589 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:49.589 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:49.589 01:02:38 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:49.589 01:02:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.589 01:02:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.847 [2024-07-14 01:02:39.060332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.847 01:02:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.106 01:02:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:50.106 01:02:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.364 01:02:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:50.364 01:02:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:50.622 01:02:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:50.880 01:02:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6453b001-7cf5-48fd-80fd-da08e84ec586 00:16:50.880 01:02:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6453b001-7cf5-48fd-80fd-da08e84ec586 lvol 20 00:16:51.138 01:02:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a3bc6b52-280e-4476-a7cb-b38c925ee872 00:16:51.138 01:02:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:51.395 01:02:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a3bc6b52-280e-4476-a7cb-b38c925ee872 00:16:51.654 01:02:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:51.912 [2024-07-14 01:02:41.116837] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.912 01:02:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.169 01:02:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1119140 00:16:52.169 01:02:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:52.170 01:02:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:52.170 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.174 01:02:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a3bc6b52-280e-4476-a7cb-b38c925ee872 MY_SNAPSHOT 00:16:53.432 01:02:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3c55d58f-afaf-4f07-bc47-d1005517d61e 00:16:53.432 01:02:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a3bc6b52-280e-4476-a7cb-b38c925ee872 30 00:16:53.690 01:02:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3c55d58f-afaf-4f07-bc47-d1005517d61e MY_CLONE 00:16:53.948 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9520bb95-aa37-478c-8f83-b437e325c77a 00:16:53.948 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9520bb95-aa37-478c-8f83-b437e325c77a 00:16:54.517 01:02:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1119140 00:17:02.630 Initializing NVMe Controllers 00:17:02.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:02.630 Controller IO queue size 128, less than required. 00:17:02.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:02.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:02.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:02.630 Initialization complete. Launching workers. 
00:17:02.630 ======================================================== 00:17:02.630 Latency(us) 00:17:02.630 Device Information : IOPS MiB/s Average min max 00:17:02.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10807.00 42.21 11847.41 1484.75 72490.48 00:17:02.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9947.50 38.86 12874.82 2005.57 68966.05 00:17:02.630 ======================================================== 00:17:02.630 Total : 20754.50 81.07 12339.84 1484.75 72490.48 00:17:02.630 00:17:02.630 01:02:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:02.630 01:02:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3bc6b52-280e-4476-a7cb-b38c925ee872 00:17:03.197 01:02:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6453b001-7cf5-48fd-80fd-da08e84ec586 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.455 rmmod nvme_tcp 00:17:03.455 rmmod nvme_fabrics 00:17:03.455 rmmod nvme_keyring 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1118830 ']' 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1118830 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1118830 ']' 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1118830 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1118830 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1118830' 00:17:03.455 killing process with pid 1118830 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1118830 00:17:03.455 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1118830 00:17:03.714 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.714 
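Stripped of the xtrace prefixes, the nvmf_lvol pass that just finished exercises one end-to-end logical-volume flow over NVMe/TCP: build a raid0 of two malloc bdevs, carve an lvstore and a small lvol (created at size 20, resized to 30 mid-run) out of it, export the lvol as a namespace, and then snapshot, clone and inflate it while spdk_nvme_perf keeps random writes in flight. A condensed replay of the RPC sequence, copied from the trace above with the workspace path shortened to ./scripts/rpc.py and the returned UUIDs captured into shell variables for readability:

  rpc=./scripts/rpc.py   # full path in the log: .../spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                   # Malloc0
  $rpc bdev_malloc_create 64 512                                   # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # lvol, initial size 20
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ... spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' writes in the background ...
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  # after perf exits: teardown
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"

The point of running the snapshot/resize/clone/inflate chain while the perf job is still writing is that the lvol operations are validated under load; the per-core latency table above (cores 3 and 4, roughly 20.7k IOPS aggregate) records that background workload, not a benchmark of the lvol operations themselves.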
01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.714 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.714 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.714 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.714 01:02:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.714 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.714 01:02:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.628 01:02:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:05.628 00:17:05.628 real 0m18.756s 00:17:05.628 user 1m4.226s 00:17:05.628 sys 0m5.505s 00:17:05.628 01:02:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:05.628 01:02:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:05.628 ************************************ 00:17:05.628 END TEST nvmf_lvol 00:17:05.628 ************************************ 00:17:05.886 01:02:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:05.886 01:02:55 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:05.886 01:02:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:05.886 01:02:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.886 01:02:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:05.886 ************************************ 00:17:05.886 START TEST nvmf_lvs_grow 00:17:05.886 ************************************ 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:05.886 * Looking for test storage... 
00:17:05.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.886 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:05.887 01:02:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:07.788 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:07.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:07.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:07.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:07.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:07.789 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:17:08.047 00:17:08.047 --- 10.0.0.2 ping statistics --- 00:17:08.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.047 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:17:08.047 00:17:08.047 --- 10.0.0.1 ping statistics --- 00:17:08.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.047 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.047 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1122399 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1122399 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1122399 ']' 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.048 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.048 [2024-07-14 01:02:57.342466] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:08.048 [2024-07-14 01:02:57.342547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.048 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.048 [2024-07-14 01:02:57.407596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.306 [2024-07-14 01:02:57.502600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.306 [2024-07-14 01:02:57.502671] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:08.306 [2024-07-14 01:02:57.502713] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.306 [2024-07-14 01:02:57.502727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.306 [2024-07-14 01:02:57.502738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.306 [2024-07-14 01:02:57.502776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.306 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.306 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:08.306 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:08.306 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.306 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.306 01:02:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.306 01:02:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:08.564 [2024-07-14 01:02:57.864608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:08.564 ************************************ 00:17:08.564 START TEST lvs_grow_clean 00:17:08.564 ************************************ 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.564 01:02:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:08.821 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:08.821 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:09.079 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:09.079 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:09.079 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:09.337 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:09.337 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:09.337 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 lvol 150 00:17:09.594 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dfb2be59-a738-45c2-b477-57f193a46b62 00:17:09.594 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:09.594 01:02:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:09.852 [2024-07-14 01:02:59.163065] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:09.852 [2024-07-14 01:02:59.163156] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:09.852 true 00:17:09.852 01:02:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:09.852 01:02:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:10.109 01:02:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:10.109 01:02:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:10.366 01:02:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dfb2be59-a738-45c2-b477-57f193a46b62 00:17:10.625 01:02:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:10.882 [2024-07-14 01:03:00.166104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.882 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1122869 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1122869 /var/tmp/bdevperf.sock 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1122869 ']' 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.140 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:11.140 [2024-07-14 01:03:00.471595] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:17:11.140 [2024-07-14 01:03:00.471668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122869 ] 00:17:11.140 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.140 [2024-07-14 01:03:00.531745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.398 [2024-07-14 01:03:00.624319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.398 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.398 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:11.398 01:03:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:11.963 Nvme0n1 00:17:11.963 01:03:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:12.219 [ 00:17:12.219 { 00:17:12.219 "name": "Nvme0n1", 00:17:12.219 "aliases": [ 00:17:12.219 "dfb2be59-a738-45c2-b477-57f193a46b62" 00:17:12.219 ], 00:17:12.219 "product_name": "NVMe disk", 00:17:12.219 "block_size": 4096, 00:17:12.219 "num_blocks": 38912, 00:17:12.219 "uuid": "dfb2be59-a738-45c2-b477-57f193a46b62", 00:17:12.219 "assigned_rate_limits": { 00:17:12.219 "rw_ios_per_sec": 0, 00:17:12.219 "rw_mbytes_per_sec": 0, 00:17:12.219 "r_mbytes_per_sec": 0, 00:17:12.219 "w_mbytes_per_sec": 0 00:17:12.219 }, 00:17:12.219 "claimed": false, 00:17:12.219 "zoned": false, 00:17:12.219 "supported_io_types": { 00:17:12.219 "read": true, 00:17:12.219 "write": true, 00:17:12.219 "unmap": true, 00:17:12.219 "flush": true, 00:17:12.219 "reset": true, 00:17:12.219 "nvme_admin": true, 00:17:12.219 "nvme_io": true, 00:17:12.219 "nvme_io_md": false, 00:17:12.219 "write_zeroes": true, 00:17:12.219 "zcopy": false, 00:17:12.219 "get_zone_info": false, 00:17:12.219 "zone_management": false, 00:17:12.219 "zone_append": false, 00:17:12.219 "compare": true, 00:17:12.219 "compare_and_write": true, 00:17:12.219 "abort": true, 00:17:12.219 "seek_hole": false, 00:17:12.219 "seek_data": false, 00:17:12.219 "copy": true, 00:17:12.219 "nvme_iov_md": false 00:17:12.219 }, 00:17:12.219 "memory_domains": [ 00:17:12.219 { 00:17:12.219 "dma_device_id": "system", 00:17:12.219 "dma_device_type": 1 00:17:12.219 } 00:17:12.219 ], 00:17:12.219 "driver_specific": { 00:17:12.219 "nvme": [ 00:17:12.219 { 00:17:12.219 "trid": { 00:17:12.219 "trtype": "TCP", 00:17:12.219 "adrfam": "IPv4", 00:17:12.219 "traddr": "10.0.0.2", 00:17:12.219 "trsvcid": "4420", 00:17:12.219 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:12.219 }, 00:17:12.219 "ctrlr_data": { 00:17:12.219 "cntlid": 1, 00:17:12.219 "vendor_id": "0x8086", 00:17:12.219 "model_number": "SPDK bdev Controller", 00:17:12.219 "serial_number": "SPDK0", 00:17:12.219 "firmware_revision": "24.09", 00:17:12.219 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:12.219 "oacs": { 00:17:12.219 "security": 0, 00:17:12.219 "format": 0, 00:17:12.219 "firmware": 0, 00:17:12.219 "ns_manage": 0 00:17:12.219 }, 00:17:12.219 "multi_ctrlr": true, 00:17:12.219 "ana_reporting": false 00:17:12.219 }, 
00:17:12.219 "vs": { 00:17:12.219 "nvme_version": "1.3" 00:17:12.219 }, 00:17:12.219 "ns_data": { 00:17:12.219 "id": 1, 00:17:12.219 "can_share": true 00:17:12.219 } 00:17:12.219 } 00:17:12.219 ], 00:17:12.219 "mp_policy": "active_passive" 00:17:12.219 } 00:17:12.219 } 00:17:12.219 ] 00:17:12.219 01:03:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1123058 00:17:12.219 01:03:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:12.219 01:03:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.219 Running I/O for 10 seconds... 00:17:13.594 Latency(us) 00:17:13.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.594 Nvme0n1 : 1.00 12885.00 50.33 0.00 0.00 0.00 0.00 0.00 00:17:13.594 =================================================================================================================== 00:17:13.594 Total : 12885.00 50.33 0.00 0.00 0.00 0.00 0.00 00:17:13.594 00:17:14.161 01:03:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:14.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.419 Nvme0n1 : 2.00 13038.50 50.93 0.00 0.00 0.00 0.00 0.00 00:17:14.419 =================================================================================================================== 00:17:14.419 Total : 13038.50 50.93 0.00 0.00 0.00 0.00 0.00 00:17:14.419 00:17:14.419 true 00:17:14.419 01:03:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:14.419 01:03:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:14.677 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:14.677 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:14.677 01:03:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1123058 00:17:15.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.241 Nvme0n1 : 3.00 13105.67 51.19 0.00 0.00 0.00 0.00 0.00 00:17:15.241 =================================================================================================================== 00:17:15.241 Total : 13105.67 51.19 0.00 0.00 0.00 0.00 0.00 00:17:15.241 00:17:16.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.616 Nvme0n1 : 4.00 13153.25 51.38 0.00 0.00 0.00 0.00 0.00 00:17:16.616 =================================================================================================================== 00:17:16.616 Total : 13153.25 51.38 0.00 0.00 0.00 0.00 0.00 00:17:16.616 00:17:17.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.550 Nvme0n1 : 5.00 13204.20 51.58 0.00 0.00 0.00 0.00 0.00 00:17:17.550 =================================================================================================================== 00:17:17.550 
Total : 13204.20 51.58 0.00 0.00 0.00 0.00 0.00 00:17:17.550 00:17:18.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.485 Nvme0n1 : 6.00 13242.17 51.73 0.00 0.00 0.00 0.00 0.00 00:17:18.485 =================================================================================================================== 00:17:18.485 Total : 13242.17 51.73 0.00 0.00 0.00 0.00 0.00 00:17:18.485 00:17:19.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.419 Nvme0n1 : 7.00 13277.29 51.86 0.00 0.00 0.00 0.00 0.00 00:17:19.419 =================================================================================================================== 00:17:19.419 Total : 13277.29 51.86 0.00 0.00 0.00 0.00 0.00 00:17:19.419 00:17:20.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.354 Nvme0n1 : 8.00 13305.62 51.98 0.00 0.00 0.00 0.00 0.00 00:17:20.354 =================================================================================================================== 00:17:20.354 Total : 13305.62 51.98 0.00 0.00 0.00 0.00 0.00 00:17:20.354 00:17:21.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.289 Nvme0n1 : 9.00 13328.56 52.06 0.00 0.00 0.00 0.00 0.00 00:17:21.289 =================================================================================================================== 00:17:21.289 Total : 13328.56 52.06 0.00 0.00 0.00 0.00 0.00 00:17:21.289 00:17:22.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.222 Nvme0n1 : 10.00 13357.30 52.18 0.00 0.00 0.00 0.00 0.00 00:17:22.222 =================================================================================================================== 00:17:22.222 Total : 13357.30 52.18 0.00 0.00 0.00 0.00 0.00 00:17:22.222 00:17:22.222 00:17:22.222 Latency(us) 00:17:22.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.223 Nvme0n1 : 10.01 13358.21 52.18 0.00 0.00 9573.72 7573.05 18738.44 00:17:22.223 =================================================================================================================== 00:17:22.223 Total : 13358.21 52.18 0.00 0.00 9573.72 7573.05 18738.44 00:17:22.223 0 00:17:22.223 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1122869 00:17:22.223 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1122869 ']' 00:17:22.223 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1122869 00:17:22.223 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:22.480 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:22.480 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1122869 00:17:22.480 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:22.480 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:22.480 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1122869' 00:17:22.480 killing process with pid 1122869 00:17:22.480 01:03:11 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1122869 00:17:22.480 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.480 00:17:22.480 Latency(us) 00:17:22.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.480 =================================================================================================================== 00:17:22.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.480 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1122869 00:17:22.480 01:03:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.045 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:23.302 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:23.302 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:23.559 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:23.559 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:23.559 01:03:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:23.817 [2024-07-14 01:03:12.977640] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:23.817 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:24.073 request: 00:17:24.073 { 00:17:24.073 "uuid": "6f19708e-42fd-4428-96ac-2edebfccf9c6", 00:17:24.073 "method": "bdev_lvol_get_lvstores", 00:17:24.073 "req_id": 1 00:17:24.073 } 00:17:24.073 Got JSON-RPC error response 00:17:24.073 response: 00:17:24.073 { 00:17:24.073 "code": -19, 00:17:24.073 "message": "No such device" 00:17:24.073 } 00:17:24.073 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:24.073 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.073 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.073 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.073 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:24.331 aio_bdev 00:17:24.331 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dfb2be59-a738-45c2-b477-57f193a46b62 00:17:24.331 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=dfb2be59-a738-45c2-b477-57f193a46b62 00:17:24.331 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:24.331 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:24.331 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:24.331 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:24.331 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:24.588 01:03:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dfb2be59-a738-45c2-b477-57f193a46b62 -t 2000 00:17:24.845 [ 00:17:24.846 { 00:17:24.846 "name": "dfb2be59-a738-45c2-b477-57f193a46b62", 00:17:24.846 "aliases": [ 00:17:24.846 "lvs/lvol" 00:17:24.846 ], 00:17:24.846 "product_name": "Logical Volume", 00:17:24.846 "block_size": 4096, 00:17:24.846 "num_blocks": 38912, 00:17:24.846 "uuid": "dfb2be59-a738-45c2-b477-57f193a46b62", 00:17:24.846 "assigned_rate_limits": { 00:17:24.846 "rw_ios_per_sec": 0, 00:17:24.846 "rw_mbytes_per_sec": 0, 00:17:24.846 "r_mbytes_per_sec": 0, 00:17:24.846 "w_mbytes_per_sec": 0 00:17:24.846 }, 00:17:24.846 "claimed": false, 00:17:24.846 "zoned": false, 00:17:24.846 "supported_io_types": { 00:17:24.846 "read": true, 00:17:24.846 "write": true, 00:17:24.846 "unmap": true, 00:17:24.846 "flush": false, 00:17:24.846 "reset": true, 00:17:24.846 "nvme_admin": false, 00:17:24.846 "nvme_io": false, 00:17:24.846 
"nvme_io_md": false, 00:17:24.846 "write_zeroes": true, 00:17:24.846 "zcopy": false, 00:17:24.846 "get_zone_info": false, 00:17:24.846 "zone_management": false, 00:17:24.846 "zone_append": false, 00:17:24.846 "compare": false, 00:17:24.846 "compare_and_write": false, 00:17:24.846 "abort": false, 00:17:24.846 "seek_hole": true, 00:17:24.846 "seek_data": true, 00:17:24.846 "copy": false, 00:17:24.846 "nvme_iov_md": false 00:17:24.846 }, 00:17:24.846 "driver_specific": { 00:17:24.846 "lvol": { 00:17:24.846 "lvol_store_uuid": "6f19708e-42fd-4428-96ac-2edebfccf9c6", 00:17:24.846 "base_bdev": "aio_bdev", 00:17:24.846 "thin_provision": false, 00:17:24.846 "num_allocated_clusters": 38, 00:17:24.846 "snapshot": false, 00:17:24.846 "clone": false, 00:17:24.846 "esnap_clone": false 00:17:24.846 } 00:17:24.846 } 00:17:24.846 } 00:17:24.846 ] 00:17:24.846 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:24.846 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:24.846 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:25.104 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:25.104 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:25.104 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:25.361 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:25.361 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dfb2be59-a738-45c2-b477-57f193a46b62 00:17:25.618 01:03:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f19708e-42fd-4428-96ac-2edebfccf9c6 00:17:25.875 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.132 00:17:26.132 real 0m17.519s 00:17:26.132 user 0m16.866s 00:17:26.132 sys 0m1.967s 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:26.132 ************************************ 00:17:26.132 END TEST lvs_grow_clean 00:17:26.132 ************************************ 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:26.132 ************************************ 00:17:26.132 START TEST lvs_grow_dirty 00:17:26.132 ************************************ 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.132 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:26.133 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:26.698 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:26.698 01:03:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:26.698 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:26.698 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:26.698 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:26.989 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:26.989 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:26.989 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 lvol 150 00:17:27.246 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=05c993cc-6b1b-4062-8076-4c823d580223 00:17:27.246 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:27.246 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:27.528 
[2024-07-14 01:03:16.905433] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:27.528 [2024-07-14 01:03:16.905522] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:27.528 true 00:17:27.528 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:27.528 01:03:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:27.786 01:03:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:27.786 01:03:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:28.045 01:03:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 05c993cc-6b1b-4062-8076-4c823d580223 00:17:28.304 01:03:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:28.563 [2024-07-14 01:03:17.900450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.563 01:03:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1125613 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1125613 /var/tmp/bdevperf.sock 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1125613 ']' 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.821 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:29.079 [2024-07-14 01:03:18.238418] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:29.079 [2024-07-14 01:03:18.238489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125613 ] 00:17:29.079 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.079 [2024-07-14 01:03:18.299024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.079 [2024-07-14 01:03:18.391064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.338 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.338 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:29.338 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:29.596 Nvme0n1 00:17:29.596 01:03:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:29.855 [ 00:17:29.855 { 00:17:29.855 "name": "Nvme0n1", 00:17:29.855 "aliases": [ 00:17:29.855 "05c993cc-6b1b-4062-8076-4c823d580223" 00:17:29.855 ], 00:17:29.855 "product_name": "NVMe disk", 00:17:29.855 "block_size": 4096, 00:17:29.855 "num_blocks": 38912, 00:17:29.855 "uuid": "05c993cc-6b1b-4062-8076-4c823d580223", 00:17:29.855 "assigned_rate_limits": { 00:17:29.855 "rw_ios_per_sec": 0, 00:17:29.855 "rw_mbytes_per_sec": 0, 00:17:29.855 "r_mbytes_per_sec": 0, 00:17:29.855 "w_mbytes_per_sec": 0 00:17:29.855 }, 00:17:29.855 "claimed": false, 00:17:29.855 "zoned": false, 00:17:29.855 "supported_io_types": { 00:17:29.855 "read": true, 00:17:29.855 "write": true, 00:17:29.855 "unmap": true, 00:17:29.855 "flush": true, 00:17:29.855 "reset": true, 00:17:29.855 "nvme_admin": true, 00:17:29.855 "nvme_io": true, 00:17:29.855 "nvme_io_md": false, 00:17:29.855 "write_zeroes": true, 00:17:29.855 "zcopy": false, 00:17:29.855 "get_zone_info": false, 00:17:29.855 "zone_management": false, 00:17:29.855 "zone_append": false, 00:17:29.855 "compare": true, 00:17:29.855 "compare_and_write": true, 00:17:29.855 "abort": true, 00:17:29.855 "seek_hole": false, 00:17:29.855 "seek_data": false, 00:17:29.855 "copy": true, 00:17:29.855 "nvme_iov_md": false 00:17:29.855 }, 00:17:29.855 "memory_domains": [ 00:17:29.855 { 00:17:29.855 "dma_device_id": "system", 00:17:29.855 "dma_device_type": 1 00:17:29.855 } 00:17:29.855 ], 00:17:29.855 "driver_specific": { 00:17:29.855 "nvme": [ 00:17:29.855 { 00:17:29.855 "trid": { 00:17:29.855 "trtype": "TCP", 00:17:29.855 "adrfam": "IPv4", 00:17:29.855 "traddr": "10.0.0.2", 00:17:29.855 "trsvcid": "4420", 00:17:29.855 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:29.855 }, 00:17:29.855 "ctrlr_data": { 00:17:29.855 "cntlid": 1, 00:17:29.855 "vendor_id": "0x8086", 00:17:29.855 "model_number": "SPDK bdev Controller", 00:17:29.855 "serial_number": "SPDK0", 
00:17:29.855 "firmware_revision": "24.09", 00:17:29.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:29.855 "oacs": { 00:17:29.855 "security": 0, 00:17:29.855 "format": 0, 00:17:29.855 "firmware": 0, 00:17:29.855 "ns_manage": 0 00:17:29.855 }, 00:17:29.855 "multi_ctrlr": true, 00:17:29.855 "ana_reporting": false 00:17:29.855 }, 00:17:29.855 "vs": { 00:17:29.855 "nvme_version": "1.3" 00:17:29.855 }, 00:17:29.855 "ns_data": { 00:17:29.855 "id": 1, 00:17:29.855 "can_share": true 00:17:29.855 } 00:17:29.855 } 00:17:29.855 ], 00:17:29.855 "mp_policy": "active_passive" 00:17:29.855 } 00:17:29.855 } 00:17:29.855 ] 00:17:29.855 01:03:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1125745 00:17:29.855 01:03:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:29.855 01:03:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:29.855 Running I/O for 10 seconds... 00:17:30.789 Latency(us) 00:17:30.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.790 Nvme0n1 : 1.00 13639.00 53.28 0.00 0.00 0.00 0.00 0.00 00:17:30.790 =================================================================================================================== 00:17:30.790 Total : 13639.00 53.28 0.00 0.00 0.00 0.00 0.00 00:17:30.790 00:17:31.724 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:31.982 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.982 Nvme0n1 : 2.00 13827.50 54.01 0.00 0.00 0.00 0.00 0.00 00:17:31.982 =================================================================================================================== 00:17:31.982 Total : 13827.50 54.01 0.00 0.00 0.00 0.00 0.00 00:17:31.982 00:17:31.982 true 00:17:31.982 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:31.982 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:32.241 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:32.241 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:32.241 01:03:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1125745 00:17:32.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.808 Nvme0n1 : 3.00 13976.00 54.59 0.00 0.00 0.00 0.00 0.00 00:17:32.808 =================================================================================================================== 00:17:32.808 Total : 13976.00 54.59 0.00 0.00 0.00 0.00 0.00 00:17:32.808 00:17:34.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.180 Nvme0n1 : 4.00 14081.75 55.01 0.00 0.00 0.00 0.00 0.00 00:17:34.180 =================================================================================================================== 00:17:34.180 Total : 14081.75 55.01 0.00 
0.00 0.00 0.00 0.00 00:17:34.180 00:17:35.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.113 Nvme0n1 : 5.00 14119.80 55.16 0.00 0.00 0.00 0.00 0.00 00:17:35.113 =================================================================================================================== 00:17:35.113 Total : 14119.80 55.16 0.00 0.00 0.00 0.00 0.00 00:17:35.113 00:17:36.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.046 Nvme0n1 : 6.00 14177.17 55.38 0.00 0.00 0.00 0.00 0.00 00:17:36.046 =================================================================================================================== 00:17:36.046 Total : 14177.17 55.38 0.00 0.00 0.00 0.00 0.00 00:17:36.046 00:17:36.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.978 Nvme0n1 : 7.00 14208.86 55.50 0.00 0.00 0.00 0.00 0.00 00:17:36.978 =================================================================================================================== 00:17:36.978 Total : 14208.86 55.50 0.00 0.00 0.00 0.00 0.00 00:17:36.978 00:17:37.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.912 Nvme0n1 : 8.00 14248.88 55.66 0.00 0.00 0.00 0.00 0.00 00:17:37.912 =================================================================================================================== 00:17:37.912 Total : 14248.88 55.66 0.00 0.00 0.00 0.00 0.00 00:17:37.912 00:17:38.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.848 Nvme0n1 : 9.00 14265.67 55.73 0.00 0.00 0.00 0.00 0.00 00:17:38.848 =================================================================================================================== 00:17:38.848 Total : 14265.67 55.73 0.00 0.00 0.00 0.00 0.00 00:17:38.848 00:17:39.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.785 Nvme0n1 : 10.00 14279.00 55.78 0.00 0.00 0.00 0.00 0.00 00:17:39.785 =================================================================================================================== 00:17:39.785 Total : 14279.00 55.78 0.00 0.00 0.00 0.00 0.00 00:17:39.785 00:17:39.785 00:17:39.785 Latency(us) 00:17:39.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.785 Nvme0n1 : 10.01 14280.29 55.78 0.00 0.00 8957.11 5267.15 19709.35 00:17:39.785 =================================================================================================================== 00:17:39.785 Total : 14280.29 55.78 0.00 0.00 8957.11 5267.15 19709.35 00:17:39.785 0 00:17:39.785 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1125613 00:17:39.785 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1125613 ']' 00:17:39.785 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1125613 00:17:39.785 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:39.785 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.044 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1125613 00:17:40.044 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:40.044 01:03:29 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:40.044 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1125613' 00:17:40.044 killing process with pid 1125613 00:17:40.044 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1125613 00:17:40.044 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.044 00:17:40.044 Latency(us) 00:17:40.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.044 =================================================================================================================== 00:17:40.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:40.044 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1125613 00:17:40.044 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:40.611 01:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:40.870 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:40.870 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:40.870 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:40.870 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:40.870 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1122399 00:17:40.870 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1122399 00:17:41.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1122399 Killed "${NVMF_APP[@]}" "$@" 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1126956 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1126956 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1126956 ']' 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.129 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:41.129 [2024-07-14 01:03:30.361756] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:41.129 [2024-07-14 01:03:30.361832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.129 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.129 [2024-07-14 01:03:30.426281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.129 [2024-07-14 01:03:30.512192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.129 [2024-07-14 01:03:30.512254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.129 [2024-07-14 01:03:30.512268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.129 [2024-07-14 01:03:30.512279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.129 [2024-07-14 01:03:30.512288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:41.129 [2024-07-14 01:03:30.512326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.425 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.425 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:41.425 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.425 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.425 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:41.425 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.425 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:41.707 [2024-07-14 01:03:30.924994] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:41.707 [2024-07-14 01:03:30.925146] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:41.707 [2024-07-14 01:03:30.925209] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:41.707 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:41.707 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 05c993cc-6b1b-4062-8076-4c823d580223 00:17:41.707 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=05c993cc-6b1b-4062-8076-4c823d580223 00:17:41.707 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:41.707 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:41.707 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:41.707 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:41.707 01:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:41.965 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 05c993cc-6b1b-4062-8076-4c823d580223 -t 2000 00:17:42.223 [ 00:17:42.223 { 00:17:42.223 "name": "05c993cc-6b1b-4062-8076-4c823d580223", 00:17:42.223 "aliases": [ 00:17:42.223 "lvs/lvol" 00:17:42.223 ], 00:17:42.223 "product_name": "Logical Volume", 00:17:42.223 "block_size": 4096, 00:17:42.223 "num_blocks": 38912, 00:17:42.223 "uuid": "05c993cc-6b1b-4062-8076-4c823d580223", 00:17:42.223 "assigned_rate_limits": { 00:17:42.223 "rw_ios_per_sec": 0, 00:17:42.223 "rw_mbytes_per_sec": 0, 00:17:42.223 "r_mbytes_per_sec": 0, 00:17:42.223 "w_mbytes_per_sec": 0 00:17:42.223 }, 00:17:42.223 "claimed": false, 00:17:42.223 "zoned": false, 00:17:42.223 "supported_io_types": { 00:17:42.223 "read": true, 00:17:42.223 "write": true, 00:17:42.223 "unmap": true, 00:17:42.223 "flush": false, 00:17:42.223 "reset": true, 00:17:42.223 "nvme_admin": false, 00:17:42.223 "nvme_io": false, 00:17:42.223 "nvme_io_md": 
false, 00:17:42.223 "write_zeroes": true, 00:17:42.223 "zcopy": false, 00:17:42.223 "get_zone_info": false, 00:17:42.223 "zone_management": false, 00:17:42.223 "zone_append": false, 00:17:42.223 "compare": false, 00:17:42.223 "compare_and_write": false, 00:17:42.223 "abort": false, 00:17:42.223 "seek_hole": true, 00:17:42.223 "seek_data": true, 00:17:42.223 "copy": false, 00:17:42.223 "nvme_iov_md": false 00:17:42.223 }, 00:17:42.223 "driver_specific": { 00:17:42.223 "lvol": { 00:17:42.223 "lvol_store_uuid": "3aeac5f9-f13a-4efa-97c6-1f6860319b31", 00:17:42.223 "base_bdev": "aio_bdev", 00:17:42.223 "thin_provision": false, 00:17:42.223 "num_allocated_clusters": 38, 00:17:42.223 "snapshot": false, 00:17:42.223 "clone": false, 00:17:42.223 "esnap_clone": false 00:17:42.223 } 00:17:42.223 } 00:17:42.223 } 00:17:42.223 ] 00:17:42.223 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:42.223 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:42.223 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:42.481 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:42.481 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:42.481 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:42.739 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:42.739 01:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:42.998 [2024-07-14 01:03:32.173938] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:42.998 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:43.257 request: 00:17:43.257 { 00:17:43.257 "uuid": "3aeac5f9-f13a-4efa-97c6-1f6860319b31", 00:17:43.257 "method": "bdev_lvol_get_lvstores", 00:17:43.257 "req_id": 1 00:17:43.257 } 00:17:43.257 Got JSON-RPC error response 00:17:43.257 response: 00:17:43.257 { 00:17:43.257 "code": -19, 00:17:43.257 "message": "No such device" 00:17:43.257 } 00:17:43.257 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:43.257 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.257 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.257 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.257 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.516 aio_bdev 00:17:43.516 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 05c993cc-6b1b-4062-8076-4c823d580223 00:17:43.516 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=05c993cc-6b1b-4062-8076-4c823d580223 00:17:43.516 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:43.516 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:43.516 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:43.516 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:43.516 01:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:43.774 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 05c993cc-6b1b-4062-8076-4c823d580223 -t 2000 00:17:44.031 [ 00:17:44.031 { 00:17:44.031 "name": "05c993cc-6b1b-4062-8076-4c823d580223", 00:17:44.031 "aliases": [ 00:17:44.031 "lvs/lvol" 00:17:44.031 ], 00:17:44.031 "product_name": "Logical Volume", 00:17:44.031 "block_size": 4096, 00:17:44.031 "num_blocks": 38912, 00:17:44.031 "uuid": "05c993cc-6b1b-4062-8076-4c823d580223", 00:17:44.031 "assigned_rate_limits": { 00:17:44.031 "rw_ios_per_sec": 0, 00:17:44.031 "rw_mbytes_per_sec": 0, 00:17:44.031 "r_mbytes_per_sec": 0, 00:17:44.031 "w_mbytes_per_sec": 0 00:17:44.031 }, 00:17:44.031 "claimed": false, 00:17:44.031 "zoned": false, 00:17:44.031 "supported_io_types": { 
00:17:44.031 "read": true, 00:17:44.031 "write": true, 00:17:44.031 "unmap": true, 00:17:44.031 "flush": false, 00:17:44.031 "reset": true, 00:17:44.031 "nvme_admin": false, 00:17:44.031 "nvme_io": false, 00:17:44.031 "nvme_io_md": false, 00:17:44.031 "write_zeroes": true, 00:17:44.031 "zcopy": false, 00:17:44.031 "get_zone_info": false, 00:17:44.031 "zone_management": false, 00:17:44.031 "zone_append": false, 00:17:44.031 "compare": false, 00:17:44.031 "compare_and_write": false, 00:17:44.031 "abort": false, 00:17:44.031 "seek_hole": true, 00:17:44.031 "seek_data": true, 00:17:44.031 "copy": false, 00:17:44.031 "nvme_iov_md": false 00:17:44.031 }, 00:17:44.031 "driver_specific": { 00:17:44.031 "lvol": { 00:17:44.031 "lvol_store_uuid": "3aeac5f9-f13a-4efa-97c6-1f6860319b31", 00:17:44.031 "base_bdev": "aio_bdev", 00:17:44.031 "thin_provision": false, 00:17:44.031 "num_allocated_clusters": 38, 00:17:44.031 "snapshot": false, 00:17:44.031 "clone": false, 00:17:44.031 "esnap_clone": false 00:17:44.031 } 00:17:44.031 } 00:17:44.031 } 00:17:44.031 ] 00:17:44.031 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:44.031 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:44.031 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:44.287 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:44.287 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:44.287 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:44.544 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:44.544 01:03:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 05c993cc-6b1b-4062-8076-4c823d580223 00:17:44.801 01:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3aeac5f9-f13a-4efa-97c6-1f6860319b31 00:17:45.058 01:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:45.316 01:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:45.317 00:17:45.317 real 0m19.127s 00:17:45.317 user 0m48.117s 00:17:45.317 sys 0m4.902s 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:45.317 ************************************ 00:17:45.317 END TEST lvs_grow_dirty 00:17:45.317 ************************************ 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:45.317 nvmf_trace.0 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.317 rmmod nvme_tcp 00:17:45.317 rmmod nvme_fabrics 00:17:45.317 rmmod nvme_keyring 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1126956 ']' 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1126956 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1126956 ']' 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1126956 00:17:45.317 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:45.575 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:45.575 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1126956 00:17:45.575 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:45.575 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:45.575 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1126956' 00:17:45.575 killing process with pid 1126956 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1126956 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1126956 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.576 
01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.576 01:03:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.112 01:03:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:48.112 00:17:48.112 real 0m41.943s 00:17:48.112 user 1m10.680s 00:17:48.112 sys 0m8.753s 00:17:48.112 01:03:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:48.112 01:03:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:48.112 ************************************ 00:17:48.112 END TEST nvmf_lvs_grow 00:17:48.112 ************************************ 00:17:48.112 01:03:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:48.112 01:03:37 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:48.112 01:03:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:48.112 01:03:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:48.112 01:03:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.112 ************************************ 00:17:48.112 START TEST nvmf_bdev_io_wait 00:17:48.112 ************************************ 00:17:48.112 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:48.112 * Looking for test storage... 
00:17:48.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.112 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.112 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:48.113 01:03:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:50.069 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:50.069 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:50.069 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:50.069 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:50.069 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:50.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:17:50.070 00:17:50.070 --- 10.0.0.2 ping statistics --- 00:17:50.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.070 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:50.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:17:50.070 00:17:50.070 --- 10.0.0.1 ping statistics --- 00:17:50.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.070 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1129492 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1129492 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1129492 ']' 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.070 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.070 [2024-07-14 01:03:39.431423] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:17:50.070 [2024-07-14 01:03:39.431510] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.070 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.329 [2024-07-14 01:03:39.506263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:50.329 [2024-07-14 01:03:39.604080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.329 [2024-07-14 01:03:39.604142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.329 [2024-07-14 01:03:39.604168] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.329 [2024-07-14 01:03:39.604182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.329 [2024-07-14 01:03:39.604194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.329 [2024-07-14 01:03:39.604265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.329 [2024-07-14 01:03:39.604331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.329 [2024-07-14 01:03:39.604390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.329 [2024-07-14 01:03:39.604394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.329 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.588 [2024-07-14 01:03:39.780601] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
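The ping exchange and the namespaced nvmf_tgt launch above rely on the interface split done earlier in the trace: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side). A condensed sketch of that wiring, built only from commands visible in the trace (the nvmf_tgt path is shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target then runs inside the namespace and is configured over /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc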
00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.588 Malloc0 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.588 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:50.589 [2024-07-14 01:03:39.841586] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1129637 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1129638 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1129641 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.589 { 00:17:50.589 "params": { 00:17:50.589 "name": "Nvme$subsystem", 00:17:50.589 "trtype": "$TEST_TRANSPORT", 00:17:50.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.589 "adrfam": "ipv4", 00:17:50.589 "trsvcid": "$NVMF_PORT", 00:17:50.589 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.589 "hdgst": ${hdgst:-false}, 00:17:50.589 "ddgst": ${ddgst:-false} 00:17:50.589 }, 00:17:50.589 "method": "bdev_nvme_attach_controller" 00:17:50.589 } 00:17:50.589 EOF 00:17:50.589 )") 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1129643 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.589 { 00:17:50.589 "params": { 00:17:50.589 "name": "Nvme$subsystem", 00:17:50.589 "trtype": "$TEST_TRANSPORT", 00:17:50.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.589 "adrfam": "ipv4", 00:17:50.589 "trsvcid": "$NVMF_PORT", 00:17:50.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.589 "hdgst": ${hdgst:-false}, 00:17:50.589 "ddgst": ${ddgst:-false} 00:17:50.589 }, 00:17:50.589 "method": "bdev_nvme_attach_controller" 00:17:50.589 } 00:17:50.589 EOF 00:17:50.589 )") 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.589 { 00:17:50.589 "params": { 00:17:50.589 "name": "Nvme$subsystem", 00:17:50.589 "trtype": "$TEST_TRANSPORT", 00:17:50.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.589 "adrfam": "ipv4", 00:17:50.589 "trsvcid": "$NVMF_PORT", 00:17:50.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.589 "hdgst": ${hdgst:-false}, 00:17:50.589 "ddgst": ${ddgst:-false} 00:17:50.589 }, 00:17:50.589 "method": "bdev_nvme_attach_controller" 00:17:50.589 } 00:17:50.589 EOF 00:17:50.589 )") 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:50.589 { 00:17:50.589 "params": { 
00:17:50.589 "name": "Nvme$subsystem", 00:17:50.589 "trtype": "$TEST_TRANSPORT", 00:17:50.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:50.589 "adrfam": "ipv4", 00:17:50.589 "trsvcid": "$NVMF_PORT", 00:17:50.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:50.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:50.589 "hdgst": ${hdgst:-false}, 00:17:50.589 "ddgst": ${ddgst:-false} 00:17:50.589 }, 00:17:50.589 "method": "bdev_nvme_attach_controller" 00:17:50.589 } 00:17:50.589 EOF 00:17:50.589 )") 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1129637 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.589 "params": { 00:17:50.589 "name": "Nvme1", 00:17:50.589 "trtype": "tcp", 00:17:50.589 "traddr": "10.0.0.2", 00:17:50.589 "adrfam": "ipv4", 00:17:50.589 "trsvcid": "4420", 00:17:50.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.589 "hdgst": false, 00:17:50.589 "ddgst": false 00:17:50.589 }, 00:17:50.589 "method": "bdev_nvme_attach_controller" 00:17:50.589 }' 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.589 "params": { 00:17:50.589 "name": "Nvme1", 00:17:50.589 "trtype": "tcp", 00:17:50.589 "traddr": "10.0.0.2", 00:17:50.589 "adrfam": "ipv4", 00:17:50.589 "trsvcid": "4420", 00:17:50.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.589 "hdgst": false, 00:17:50.589 "ddgst": false 00:17:50.589 }, 00:17:50.589 "method": "bdev_nvme_attach_controller" 00:17:50.589 }' 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.589 "params": { 00:17:50.589 "name": "Nvme1", 00:17:50.589 "trtype": "tcp", 00:17:50.589 "traddr": "10.0.0.2", 00:17:50.589 "adrfam": "ipv4", 00:17:50.589 "trsvcid": "4420", 00:17:50.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.589 "hdgst": false, 00:17:50.589 "ddgst": false 00:17:50.589 }, 00:17:50.589 "method": "bdev_nvme_attach_controller" 00:17:50.589 }' 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:50.589 01:03:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:50.589 "params": { 00:17:50.589 "name": "Nvme1", 00:17:50.589 "trtype": "tcp", 00:17:50.589 "traddr": "10.0.0.2", 00:17:50.589 "adrfam": "ipv4", 00:17:50.589 "trsvcid": "4420", 00:17:50.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.589 "hdgst": false, 00:17:50.589 "ddgst": false 00:17:50.589 }, 00:17:50.589 "method": 
"bdev_nvme_attach_controller" 00:17:50.589 }' 00:17:50.589 [2024-07-14 01:03:39.888950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:50.589 [2024-07-14 01:03:39.889027] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:50.589 [2024-07-14 01:03:39.889571] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:50.589 [2024-07-14 01:03:39.889578] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:50.589 [2024-07-14 01:03:39.889577] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:50.589 [2024-07-14 01:03:39.889656] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 01:03:39.889656] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-14 01:03:39.889656] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:50.589 --proc-type=auto ] 00:17:50.589 --proc-type=auto ] 00:17:50.589 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.848 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.848 [2024-07-14 01:03:40.068053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.848 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.848 [2024-07-14 01:03:40.143749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:50.848 [2024-07-14 01:03:40.167450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.848 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.848 [2024-07-14 01:03:40.243312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:51.107 [2024-07-14 01:03:40.268331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.107 [2024-07-14 01:03:40.342515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.107 [2024-07-14 01:03:40.347795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:51.107 [2024-07-14 01:03:40.412981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:51.366 Running I/O for 1 seconds... 00:17:51.366 Running I/O for 1 seconds... 00:17:51.366 Running I/O for 1 seconds... 00:17:51.366 Running I/O for 1 seconds... 
00:17:52.302 00:17:52.302 Latency(us) 00:17:52.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.302 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:52.302 Nvme1n1 : 1.01 11438.25 44.68 0.00 0.00 11148.91 6310.87 19320.98 00:17:52.302 =================================================================================================================== 00:17:52.302 Total : 11438.25 44.68 0.00 0.00 11148.91 6310.87 19320.98 00:17:52.302 00:17:52.302 Latency(us) 00:17:52.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.302 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:52.302 Nvme1n1 : 1.01 10058.33 39.29 0.00 0.00 12698.53 3252.53 17087.91 00:17:52.302 =================================================================================================================== 00:17:52.302 Total : 10058.33 39.29 0.00 0.00 12698.53 3252.53 17087.91 00:17:52.302 00:17:52.302 Latency(us) 00:17:52.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.302 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:52.302 Nvme1n1 : 1.00 199310.83 778.56 0.00 0.00 639.73 273.07 849.54 00:17:52.302 =================================================================================================================== 00:17:52.302 Total : 199310.83 778.56 0.00 0.00 639.73 273.07 849.54 00:17:52.302 00:17:52.302 Latency(us) 00:17:52.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.302 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:52.302 Nvme1n1 : 1.03 3892.13 15.20 0.00 0.00 32467.00 11893.57 45049.93 00:17:52.302 =================================================================================================================== 00:17:52.302 Total : 3892.13 15.20 0.00 0.00 32467.00 11893.57 45049.93 00:17:52.560 01:03:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1129638 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1129641 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1129643 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:52.818 rmmod nvme_tcp 00:17:52.818 rmmod nvme_fabrics 00:17:52.818 rmmod nvme_keyring 00:17:52.818 01:03:42 
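The four result blocks above come from four bdevperf instances run in parallel against the same cnode1 subsystem, one per workload: write on core mask 0x10, read on 0x20, flush on 0x40 and unmap on 0x80, each for 1 second at queue depth 128 with 4096-byte I/O. Each instance reads its attach parameters (the Nvme1 controller JSON printed earlier) from /dev/fd/63, i.e. a process substitution carrying gen_nvmf_target_json's output, and passes its own -i instance id, matching the distinct --file-prefix=spdk1..spdk4 values in the EAL lines above. The invocations, with the workspace path shortened, were:

    build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
    build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256
    build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
    build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256

To rerun one of these by hand, the generated config can be saved to a file and given to --json instead of the process substitution.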
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1129492 ']' 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1129492 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1129492 ']' 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1129492 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1129492 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1129492' 00:17:52.818 killing process with pid 1129492 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1129492 00:17:52.818 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1129492 00:17:53.077 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.077 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.077 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.077 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.077 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.077 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.077 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.077 01:03:42 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.978 01:03:44 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:54.978 00:17:54.978 real 0m7.305s 00:17:54.978 user 0m14.916s 00:17:54.978 sys 0m3.749s 00:17:54.978 01:03:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:54.978 01:03:44 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:54.978 ************************************ 00:17:54.978 END TEST nvmf_bdev_io_wait 00:17:54.978 ************************************ 00:17:55.236 01:03:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.236 01:03:44 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:55.236 01:03:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.236 01:03:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.236 01:03:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.236 ************************************ 00:17:55.236 START TEST nvmf_queue_depth 00:17:55.236 
************************************ 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:55.236 * Looking for test storage... 00:17:55.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.236 01:03:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.134 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.134 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:57.134 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:57.134 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:57.134 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:57.134 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:57.134 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:57.135 
01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:57.135 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:57.135 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:57.135 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:57.135 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:57.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:17:57.135 00:17:57.135 --- 10.0.0.2 ping statistics --- 00:17:57.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.135 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:17:57.135 00:17:57.135 --- 10.0.0.1 ping statistics --- 00:17:57.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.135 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.135 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1131855 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1131855 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1131855 ']' 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.394 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.394 [2024-07-14 01:03:46.618229] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:17:57.394 [2024-07-14 01:03:46.618312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.394 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.394 [2024-07-14 01:03:46.687203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.394 [2024-07-14 01:03:46.777234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.395 [2024-07-14 01:03:46.777294] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.395 [2024-07-14 01:03:46.777310] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.395 [2024-07-14 01:03:46.777324] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.395 [2024-07-14 01:03:46.777335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.395 [2024-07-14 01:03:46.777372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.653 [2024-07-14 01:03:46.926487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.653 Malloc0 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.653 
01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.653 [2024-07-14 01:03:46.986523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1131880 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:57.653 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:57.654 01:03:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1131880 /var/tmp/bdevperf.sock 00:17:57.654 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1131880 ']' 00:17:57.654 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.654 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.654 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.654 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.654 01:03:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:57.654 [2024-07-14 01:03:47.031437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
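In the trace, rpc_cmd is effectively the harness wrapper around SPDK's scripts/rpc.py, so the target provisioning and bdevperf run for this queue-depth test amount to roughly the sequence below. Every method name and argument is taken verbatim from the trace; the paths are abbreviated relative to the SPDK checkout and the backgrounding/wait-for-socket details of queue_depth.sh are glossed over, so treat this as a sketch of the flow rather than a drop-in replacement.

  # target side: runs inside the cvl_0_0_ns_spdk namespace, RPC on the default /var/tmp/spdk.sock
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf starts idle (-z), a controller is attached over its RPC socket,
  # then the queue-depth 1024 verify workload is kicked off for 10 seconds
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests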
00:17:57.654 [2024-07-14 01:03:47.031513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131880 ] 00:17:57.654 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.912 [2024-07-14 01:03:47.092212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.912 [2024-07-14 01:03:47.180399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.912 01:03:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.912 01:03:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:57.912 01:03:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:57.912 01:03:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.912 01:03:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:58.171 NVMe0n1 00:17:58.171 01:03:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.171 01:03:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.171 Running I/O for 10 seconds... 00:18:10.378 00:18:10.378 Latency(us) 00:18:10.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.378 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:10.378 Verification LBA range: start 0x0 length 0x4000 00:18:10.378 NVMe0n1 : 10.09 8220.16 32.11 0.00 0.00 124061.67 22524.97 79225.74 00:18:10.378 =================================================================================================================== 00:18:10.378 Total : 8220.16 32.11 0.00 0.00 124061.67 22524.97 79225.74 00:18:10.378 0 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1131880 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1131880 ']' 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1131880 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131880 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131880' 00:18:10.378 killing process with pid 1131880 00:18:10.378 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1131880 00:18:10.378 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.378 00:18:10.378 Latency(us) 00:18:10.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.378 
=================================================================================================================== 00:18:10.378 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1131880 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:10.379 rmmod nvme_tcp 00:18:10.379 rmmod nvme_fabrics 00:18:10.379 rmmod nvme_keyring 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1131855 ']' 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1131855 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1131855 ']' 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1131855 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1131855 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1131855' 00:18:10.379 killing process with pid 1131855 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1131855 00:18:10.379 01:03:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1131855 00:18:10.379 01:03:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:10.379 01:03:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:10.379 01:03:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:10.379 01:03:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.379 01:03:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:10.379 01:03:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.379 01:03:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.379 01:03:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.946 01:04:00 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:10.946 00:18:10.946 real 0m15.868s 00:18:10.946 user 0m22.329s 00:18:10.946 sys 0m3.031s 00:18:10.946 01:04:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:10.946 01:04:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:10.946 ************************************ 00:18:10.946 END TEST nvmf_queue_depth 00:18:10.946 ************************************ 00:18:10.946 01:04:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:10.946 01:04:00 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:10.946 01:04:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:10.946 01:04:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:10.946 01:04:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:10.946 ************************************ 00:18:10.946 START TEST nvmf_target_multipath 00:18:10.946 ************************************ 00:18:10.946 01:04:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:11.205 * Looking for test storage... 00:18:11.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:11.205 01:04:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.108 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:13.109 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:13.109 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:13.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:13.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:13.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:18:13.109 00:18:13.109 --- 10.0.0.2 ping statistics --- 00:18:13.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.109 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:18:13.109 00:18:13.109 --- 10.0.0.1 ping statistics --- 00:18:13.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.109 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:13.109 only one NIC for nvmf test 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.109 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.109 rmmod nvme_tcp 00:18:13.109 rmmod nvme_fabrics 00:18:13.369 rmmod nvme_keyring 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.369 01:04:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.276 00:18:15.276 real 0m4.258s 00:18:15.276 user 0m0.796s 00:18:15.276 sys 0m1.452s 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:15.276 01:04:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.276 ************************************ 00:18:15.276 END TEST nvmf_target_multipath 00:18:15.276 ************************************ 00:18:15.276 01:04:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:15.277 01:04:04 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:15.277 01:04:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:15.277 01:04:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.277 01:04:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.277 ************************************ 00:18:15.277 START TEST nvmf_zcopy 00:18:15.277 ************************************ 00:18:15.277 01:04:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:15.534 * Looking for test storage... 
00:18:15.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:15.534 01:04:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:17.437 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.437 
01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:17.437 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:17.437 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:17.437 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:17.437 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:17.438 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:18:17.695 00:18:17.695 --- 10.0.0.2 ping statistics --- 00:18:17.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.695 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:18:17.695 00:18:17.695 --- 10.0.0.1 ping statistics --- 00:18:17.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.695 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1137041 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1137041 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1137041 ']' 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.695 01:04:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.696 01:04:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.696 01:04:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.696 [2024-07-14 01:04:06.964399] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:17.696 [2024-07-14 01:04:06.964478] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.696 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.696 [2024-07-14 01:04:07.036973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.954 [2024-07-14 01:04:07.130591] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.954 [2024-07-14 01:04:07.130654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:17.954 [2024-07-14 01:04:07.130672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.954 [2024-07-14 01:04:07.130685] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.954 [2024-07-14 01:04:07.130697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.954 [2024-07-14 01:04:07.130734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.954 [2024-07-14 01:04:07.269317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.954 [2024-07-14 01:04:07.285489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.954 malloc0 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.954 
01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:17.954 { 00:18:17.954 "params": { 00:18:17.954 "name": "Nvme$subsystem", 00:18:17.954 "trtype": "$TEST_TRANSPORT", 00:18:17.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.954 "adrfam": "ipv4", 00:18:17.954 "trsvcid": "$NVMF_PORT", 00:18:17.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.954 "hdgst": ${hdgst:-false}, 00:18:17.954 "ddgst": ${ddgst:-false} 00:18:17.954 }, 00:18:17.954 "method": "bdev_nvme_attach_controller" 00:18:17.954 } 00:18:17.954 EOF 00:18:17.954 )") 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:17.954 01:04:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:17.954 "params": { 00:18:17.954 "name": "Nvme1", 00:18:17.954 "trtype": "tcp", 00:18:17.954 "traddr": "10.0.0.2", 00:18:17.954 "adrfam": "ipv4", 00:18:17.954 "trsvcid": "4420", 00:18:17.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.954 "hdgst": false, 00:18:17.954 "ddgst": false 00:18:17.954 }, 00:18:17.954 "method": "bdev_nvme_attach_controller" 00:18:17.954 }' 00:18:17.954 [2024-07-14 01:04:07.362675] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:17.954 [2024-07-14 01:04:07.362756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137072 ] 00:18:18.213 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.213 [2024-07-14 01:04:07.426913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.213 [2024-07-14 01:04:07.523756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.473 Running I/O for 10 seconds... 
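Condensed from the target/zcopy.sh trace above: the target is provisioned with a handful of RPCs (a zero-copy TCP transport, a subsystem with listeners and a 32 MiB malloc bdev exposed as namespace 1), then bdevperf is run against it through a generated bdev_nvme_attach_controller JSON config. A hand-run equivalent might look like the sketch below; it assumes nvmf_tgt was started inside the namespace as shown above and answers RPCs on the default /var/tmp/spdk.sock (rpc_cmd in the test effectively forwards these arguments to scripts/rpc.py), and bdev.json is a hypothetical file holding the JSON block printed in the trace (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1).

    # provisioning RPCs, flags copied verbatim from the trace above
    RPC="scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                   # 32 MiB bdev, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # 10 s verify workload over NVMe/TCP, matching the bdevperf invocation in the trace
    build/examples/bdevperf --json bdev.json -t 10 -q 128 -w verify -o 8192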
00:18:28.506 00:18:28.506 Latency(us) 00:18:28.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.506 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:28.506 Verification LBA range: start 0x0 length 0x1000 00:18:28.506 Nvme1n1 : 10.01 5932.57 46.35 0.00 0.00 21502.95 3519.53 37088.52 00:18:28.506 =================================================================================================================== 00:18:28.506 Total : 5932.57 46.35 0.00 0.00 21502.95 3519.53 37088.52 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1138261 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:28.767 { 00:18:28.767 "params": { 00:18:28.767 "name": "Nvme$subsystem", 00:18:28.767 "trtype": "$TEST_TRANSPORT", 00:18:28.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.767 "adrfam": "ipv4", 00:18:28.767 "trsvcid": "$NVMF_PORT", 00:18:28.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.767 "hdgst": ${hdgst:-false}, 00:18:28.767 "ddgst": ${ddgst:-false} 00:18:28.767 }, 00:18:28.767 "method": "bdev_nvme_attach_controller" 00:18:28.767 } 00:18:28.767 EOF 00:18:28.767 )") 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:28.767 [2024-07-14 01:04:17.994598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:17.994643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:28.767 01:04:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:28.767 "params": { 00:18:28.767 "name": "Nvme1", 00:18:28.767 "trtype": "tcp", 00:18:28.767 "traddr": "10.0.0.2", 00:18:28.767 "adrfam": "ipv4", 00:18:28.767 "trsvcid": "4420", 00:18:28.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.767 "hdgst": false, 00:18:28.767 "ddgst": false 00:18:28.767 }, 00:18:28.767 "method": "bdev_nvme_attach_controller" 00:18:28.767 }' 00:18:28.767 [2024-07-14 01:04:18.002554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.002582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.010572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.010597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.018595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.018621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.026617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.026642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.031929] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:28.767 [2024-07-14 01:04:18.032000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138261 ] 00:18:28.767 [2024-07-14 01:04:18.034639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.034665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.042660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.042684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.050682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.050706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.058704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.058728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.767 [2024-07-14 01:04:18.066729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.066754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.074749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.074773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.082773] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.082798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.090795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.090819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.094028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.767 [2024-07-14 01:04:18.098838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.098876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.106863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.106920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.114862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.114912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.122892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.122929] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.130927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.130948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.138945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.138966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.146995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.147025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.155007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.155037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.162999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.163021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.171020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.171043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:28.767 [2024-07-14 01:04:18.179043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:28.767 [2024-07-14 01:04:18.179066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.187060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.187081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.190711] reactor.c: 941:reactor_run: *NOTICE*: Reactor 
started on core 0 00:18:29.028 [2024-07-14 01:04:18.195080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.195101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.203107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.203131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.211190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.211229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.219204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.219241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.227249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.227294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.235264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.235302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.243294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.243334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.251305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.251356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.259298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.259324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.267361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.267398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.275391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.275434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.283389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.283428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.291377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.291403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.299412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.299437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.307491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:29.028 [2024-07-14 01:04:18.307518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.315452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.315480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.323514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.323540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.331498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.331526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.339521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.339548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.347542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.347569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.355562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.355588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.363584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.363609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.371607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.371632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.379634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.379658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.387697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.387722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.395682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.395708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.403704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.403738] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.411728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.411752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.419752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.419776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.427776] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.427800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.028 [2024-07-14 01:04:18.435803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.028 [2024-07-14 01:04:18.435831] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.288 [2024-07-14 01:04:18.443824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.288 [2024-07-14 01:04:18.443850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.288 [2024-07-14 01:04:18.451848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.288 [2024-07-14 01:04:18.451881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.288 [2024-07-14 01:04:18.459877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.288 [2024-07-14 01:04:18.459926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.467915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.467945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.475943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.475966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.483956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.483978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.492000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.492026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 Running I/O for 5 seconds... 
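The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs surrounding the second bdevperf run (the 5 second randrw workload started above) does not abort the test; it is consistent with the script repeatedly re-issuing nvmf_subsystem_add_ns for a namespace ID that malloc0 already occupies while zero-copy I/O is in flight, so each attempt is rejected in subsystem.c:2054 and surfaces through nvmf_rpc.c:1546. A purely illustrative loop that would produce the same pattern (the actual driver lives inside the SPDK test scripts, not reproduced here):

    # hypothetical reproduction of the error bursts above; $perfpid is the bdevperf
    # process id (1138261 in this run, see the perfpid= assignment in the trace)
    while kill -0 "$perfpid" 2>/dev/null; do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
        sleep 0.01
    done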
00:18:29.289 [2024-07-14 01:04:18.499999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.500022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.512219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.512247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.521973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.522001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.533735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.533762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.544545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.544573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.558196] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.558223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.568916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.568944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.579889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.579917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.592772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.592798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.602438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.602465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.613881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.613908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.624748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.624775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.635625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.635651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.646389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.646416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.657301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 
[2024-07-14 01:04:18.657328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.668345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.668371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.679358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.679385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.692127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.692170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.289 [2024-07-14 01:04:18.701653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.289 [2024-07-14 01:04:18.701680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.713299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.713326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.724448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.724475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.735404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.735431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.748274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.748301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.758466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.758492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.769710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.769737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.781150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.781177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.794261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.794287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.804623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.804650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.815762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.815789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.828508] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.828534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.838513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.838540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.849858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.849891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.862350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.862377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.872132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.872158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.883202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.883228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.893569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.893596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.549 [2024-07-14 01:04:18.904203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.549 [2024-07-14 01:04:18.904230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.550 [2024-07-14 01:04:18.914910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.550 [2024-07-14 01:04:18.914937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.550 [2024-07-14 01:04:18.925575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.550 [2024-07-14 01:04:18.925602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.550 [2024-07-14 01:04:18.936709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.550 [2024-07-14 01:04:18.936735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.550 [2024-07-14 01:04:18.949616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.550 [2024-07-14 01:04:18.949643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.550 [2024-07-14 01:04:18.960012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.550 [2024-07-14 01:04:18.960038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.809 [2024-07-14 01:04:18.970802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.809 [2024-07-14 01:04:18.970830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:29.809 [2024-07-14 01:04:18.983346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:29.809 [2024-07-14 01:04:18.983373] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:29.809 [2024-07-14 01:04:18.993807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:29.809 [2024-07-14 01:04:18.993841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of messages (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats roughly every 10 ms from [2024-07-14 01:04:19.004964] through [2024-07-14 01:04:22.292291] (elapsed 00:18:29.809 to 00:18:32.924), as each add-namespace request is rejected because NSID 1 is already in use ...]
00:18:32.924 [2024-07-14 01:04:22.292264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:32.924 [2024-07-14 01:04:22.292291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:32.924 [2024-07-14 01:04:22.303267] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.924 [2024-07-14 01:04:22.303294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.924 [2024-07-14 01:04:22.316074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.924 [2024-07-14 01:04:22.316101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.924 [2024-07-14 01:04:22.325635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.924 [2024-07-14 01:04:22.325662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:32.924 [2024-07-14 01:04:22.337362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:32.924 [2024-07-14 01:04:22.337389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.350152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.350179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.359901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.359927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.371375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.371401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.382068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.382095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.393237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.393264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.404303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.404329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.415347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.415373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.426239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.426265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.437260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.437289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.448118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.448145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.458993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.459020] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.470248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.470277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.481056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.481083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.493582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.493616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.503133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.503160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.514781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.514824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.527713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.527756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.537612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.537639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.548998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.549025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.559989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.560016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.571129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.571155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.582104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.582145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.185 [2024-07-14 01:04:22.592555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.185 [2024-07-14 01:04:22.592581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.602620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.602648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.613917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.613944] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.624782] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.624809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.637197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.637224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.646792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.646821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.658687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.658714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.671442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.671468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.681237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.681264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.692752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.692794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.703701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.703737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.714209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.714236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.724819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.724862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.735700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.444 [2024-07-14 01:04:22.735728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.444 [2024-07-14 01:04:22.756635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.756665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.767460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.767488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.778559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.778586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.789441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.789468] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.800573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.800601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.811650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.811678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.822676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.822703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.833635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.833662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.844533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.844560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.445 [2024-07-14 01:04:22.857217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.445 [2024-07-14 01:04:22.857245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.867211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.867239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.878919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.878946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.889982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.890014] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.901009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.901035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.911939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.911966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.922904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.922940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.933733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.933760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.944627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.944654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.955658] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.955686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.966384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.966411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.977165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.977192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.988177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.988204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:22.999075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:22.999102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.010049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.010076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.022282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.022309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.031367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.031394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.042921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.042950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.055553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.055580] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.065998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.066025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.077362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.077391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.088294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.088320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.099441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.099468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.704 [2024-07-14 01:04:23.112458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.704 [2024-07-14 01:04:23.112485] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.122573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.122602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.133737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.133771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.144796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.144823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.155903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.155930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.167098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.167125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.178035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.178062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.190891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.190918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.201037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.201064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.211975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.212002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.223225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.223253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.234549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.234577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.245559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.245585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.256461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.256488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.267142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.267170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.277825] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.277853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.290628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.290655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.300045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.300071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.311302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.311328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.321826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.321852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.332425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.332452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.344009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.344037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.354767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.354796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.963 [2024-07-14 01:04:23.365928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.963 [2024-07-14 01:04:23.365955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.221 [2024-07-14 01:04:23.378718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.221 [2024-07-14 01:04:23.378745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.221 [2024-07-14 01:04:23.388717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.221 [2024-07-14 01:04:23.388745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.221 [2024-07-14 01:04:23.399152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.221 [2024-07-14 01:04:23.399179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.221 [2024-07-14 01:04:23.410074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.221 [2024-07-14 01:04:23.410101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.221 [2024-07-14 01:04:23.421141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.221 [2024-07-14 01:04:23.421167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.221 [2024-07-14 01:04:23.431814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.221 [2024-07-14 01:04:23.431841] 
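[Sketch, not part of the log: the error pair above is what the SPDK target prints when an nvmf_subsystem_add_ns RPC asks for an NSID the subsystem already exposes. Assuming two bdevs named malloc0 and malloc1 (names not shown in this excerpt) and the cnode1 subsystem from this run, the same failure can be reproduced with scripts/rpc.py roughly as follows:]
    # first add claims NSID 1 on the subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # second add requests the same NSID and is rejected with
    # "Requested NSID 1 already in use" / "Unable to add namespace"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1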
00:18:34.221 Latency(us)
00:18:34.221 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
00:18:34.221 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:34.221 Nvme1n1                     :       5.01   11772.28      91.97       0.00       0.00   10859.56    4854.52   20680.25
00:18:34.221 ===================================================================================================================
00:18:34.221 Total                       :              11772.28      91.97       0.00       0.00   10859.56    4854.52   20680.25
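[Sketch: the MiB/s column is consistent with the 8192-byte I/O size shown in the job line, since 11772.28 IOPS x 8192 bytes ≈ 91.97 MiB/s; a quick check:]
    awk 'BEGIN { printf "%.2f MiB/s\n", 11772.28 * 8192 / (1024 * 1024) }'   # prints 91.97 MiB/s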
[... after the summary the same error pair resumes briefly, from 01:04:23.524 through 01:04:23.733, and is elided here ...]
00:18:34.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1138261) - No such process
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1138261
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:34.479 delay0
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:34.479 01:04:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
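[Sketch, not part of the log: rpc_cmd is the harness's shorthand for issuing SPDK JSON-RPC calls, so the two calls above correspond roughly to the stand-alone scripts/rpc.py commands below; the latency arguments are in microseconds, and malloc0 is the base bdev created earlier in the run.]
    # wrap malloc0 in a delay bdev that adds about 1 s of average and p99 latency to reads and writes
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the delay bdev as NSID 1 of cnode1 (NSID 1 was freed by the nvmf_subsystem_remove_ns call above)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1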
target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:34.479 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.479 [2024-07-14 01:04:23.848085] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:41.051 Initializing NVMe Controllers 00:18:41.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:41.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:41.051 Initialization complete. Launching workers. 00:18:41.051 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:18:41.051 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 33 00:18:41.051 success 199, unsuccess 181, failed 0 00:18:41.051 01:04:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:41.051 01:04:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:41.051 01:04:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:41.051 01:04:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:41.051 01:04:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:41.051 01:04:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:41.051 01:04:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:41.051 01:04:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:41.051 rmmod nvme_tcp 00:18:41.051 rmmod nvme_fabrics 00:18:41.051 rmmod nvme_keyring 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1137041 ']' 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1137041 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1137041 ']' 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1137041 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1137041 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1137041' 00:18:41.051 killing process with pid 1137041 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1137041 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1137041 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:41.051 01:04:30 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.051 01:04:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.957 01:04:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.957 00:18:42.957 real 0m27.706s 00:18:42.957 user 0m41.078s 00:18:42.957 sys 0m8.093s 00:18:42.957 01:04:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.957 01:04:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:42.957 ************************************ 00:18:42.957 END TEST nvmf_zcopy 00:18:42.957 ************************************ 00:18:43.215 01:04:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:43.215 01:04:32 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:43.215 01:04:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:43.215 01:04:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:43.215 01:04:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.215 ************************************ 00:18:43.215 START TEST nvmf_nmic 00:18:43.215 ************************************ 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:43.215 * Looking for test storage... 
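[Sketch, not part of the log: run_test is an autotest_common.sh helper; judging only from the START/END banners and the real/user/sys timing block above, its observable behaviour is approximately the following, though the actual helper also manages the xtrace bookkeeping seen throughout the log.]
    run_test() {
        # rough approximation of the observable behaviour, not the actual SPDK implementation
        local test_name=$1
        shift
        echo "************ START TEST $test_name ************"
        time "$@"                  # e.g. nmic.sh --transport=tcp
        local rc=$?
        echo "************ END TEST $test_name ************"
        return $rc
    }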
00:18:43.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.215 01:04:32 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:43.215 01:04:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.118 
01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:45.118 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.118 01:04:34 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:45.118 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:45.118 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:45.118 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.118 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:18:45.380 00:18:45.380 --- 10.0.0.2 ping statistics --- 00:18:45.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.380 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:18:45.380 00:18:45.380 --- 10.0.0.1 ping statistics --- 00:18:45.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.380 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1141635 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1141635 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1141635 ']' 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.380 01:04:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.380 [2024-07-14 01:04:34.701222] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:45.380 [2024-07-14 01:04:34.701316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.380 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.380 [2024-07-14 01:04:34.780705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.640 [2024-07-14 01:04:34.875685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.641 [2024-07-14 01:04:34.875740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:45.641 [2024-07-14 01:04:34.875771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.641 [2024-07-14 01:04:34.875782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.641 [2024-07-14 01:04:34.875792] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.641 [2024-07-14 01:04:34.875851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.641 [2024-07-14 01:04:34.875910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.641 [2024-07-14 01:04:34.875978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.641 [2024-07-14 01:04:34.879895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.641 [2024-07-14 01:04:35.039803] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.641 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.901 Malloc0 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.901 [2024-07-14 01:04:35.093560] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:45.901 test case1: single bdev can't be used in multiple subsystems 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.901 [2024-07-14 01:04:35.117407] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:45.901 [2024-07-14 01:04:35.117437] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:45.901 [2024-07-14 01:04:35.117469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.901 request: 00:18:45.901 { 00:18:45.901 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.901 "namespace": { 00:18:45.901 "bdev_name": "Malloc0", 00:18:45.901 "no_auto_visible": false 00:18:45.901 }, 00:18:45.901 "method": "nvmf_subsystem_add_ns", 00:18:45.901 "req_id": 1 00:18:45.901 } 00:18:45.901 Got JSON-RPC error response 00:18:45.901 response: 00:18:45.901 { 00:18:45.901 "code": -32602, 00:18:45.901 "message": "Invalid parameters" 00:18:45.901 } 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:45.901 Adding namespace failed - expected result. 
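Editor's note: the xtrace output above is dense, so here is a hand-written condensation of what test case 1 exercises. It is a sketch, not an excerpt of target/nmic.sh: the trace's rpc_cmd helper forwards to scripts/rpc.py (shown directly here, as the fio_target test later in this log does), and the RPC socket and bdev/subsystem names are simply carried over from the run above.

  # Sketch only: condensed from the nmic.sh trace above, assuming nvmf_tgt is already
  # serving JSON-RPC on /var/tmp/spdk.sock and scripts/rpc.py is on PATH.
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim of Malloc0 succeeds
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if ! rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      # Expected: Malloc0 is already claimed (exclusive_write) by cnode1, so the RPC
      # returns -32602 and the test records the failure as the desired outcome.
      echo ' Adding namespace failed - expected result.'
  fi

The nmic_status bookkeeping in the trace captures exactly this: the RPC's non-zero exit status is the pass condition for the test case.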
00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:45.901 test case2: host connect to nvmf target in multiple paths 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:45.901 [2024-07-14 01:04:35.125523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.901 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:46.472 01:04:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:47.041 01:04:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:47.041 01:04:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:47.041 01:04:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.041 01:04:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:47.041 01:04:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:49.578 01:04:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:49.578 01:04:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:49.578 01:04:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.578 01:04:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:49.578 01:04:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.578 01:04:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:49.578 01:04:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:49.578 [global] 00:18:49.578 thread=1 00:18:49.578 invalidate=1 00:18:49.578 rw=write 00:18:49.578 time_based=1 00:18:49.578 runtime=1 00:18:49.578 ioengine=libaio 00:18:49.578 direct=1 00:18:49.578 bs=4096 00:18:49.578 iodepth=1 00:18:49.578 norandommap=0 00:18:49.578 numjobs=1 00:18:49.578 00:18:49.578 verify_dump=1 00:18:49.578 verify_backlog=512 00:18:49.578 verify_state_save=0 00:18:49.578 do_verify=1 00:18:49.578 verify=crc32c-intel 00:18:49.578 [job0] 00:18:49.578 filename=/dev/nvme0n1 00:18:49.578 Could not set queue depth (nvme0n1) 00:18:49.578 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.578 fio-3.35 00:18:49.578 Starting 1 thread 00:18:50.519 00:18:50.519 job0: (groupid=0, jobs=1): err= 0: pid=1142153: Sun Jul 14 01:04:39 2024 00:18:50.519 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:18:50.519 slat (nsec): min=12764, max=34528, avg=26002.62, stdev=8954.25 
00:18:50.519 clat (usec): min=40896, max=42044, avg=41565.88, stdev=496.53 00:18:50.519 lat (usec): min=40930, max=42057, avg=41591.88, stdev=497.79 00:18:50.519 clat percentiles (usec): 00:18:50.519 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:50.519 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:18:50.519 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:50.519 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:50.519 | 99.99th=[42206] 00:18:50.519 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:18:50.519 slat (usec): min=7, max=32905, avg=76.09, stdev=1453.74 00:18:50.519 clat (usec): min=196, max=445, avg=225.39, stdev=18.47 00:18:50.519 lat (usec): min=204, max=33164, avg=301.48, stdev=1455.32 00:18:50.519 clat percentiles (usec): 00:18:50.519 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:18:50.519 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:18:50.519 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 255], 00:18:50.519 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 445], 99.95th=[ 445], 00:18:50.519 | 99.99th=[ 445] 00:18:50.519 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:50.519 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:50.519 lat (usec) : 250=89.49%, 500=6.57% 00:18:50.519 lat (msec) : 50=3.94% 00:18:50.519 cpu : usr=0.58%, sys=0.68%, ctx=535, majf=0, minf=2 00:18:50.519 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:50.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:50.519 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:50.519 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:50.519 00:18:50.519 Run status group 0 (all jobs): 00:18:50.519 READ: bw=81.6KiB/s (83.5kB/s), 81.6KiB/s-81.6KiB/s (83.5kB/s-83.5kB/s), io=84.0KiB (86.0kB), run=1030-1030msec 00:18:50.519 WRITE: bw=1988KiB/s (2036kB/s), 1988KiB/s-1988KiB/s (2036kB/s-2036kB/s), io=2048KiB (2097kB), run=1030-1030msec 00:18:50.519 00:18:50.519 Disk stats (read/write): 00:18:50.519 nvme0n1: ios=69/512, merge=0/0, ticks=921/102, in_queue=1023, util=98.80% 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:50.519 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:50.519 rmmod nvme_tcp 00:18:50.519 rmmod nvme_fabrics 00:18:50.519 rmmod nvme_keyring 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1141635 ']' 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1141635 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1141635 ']' 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1141635 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1141635 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1141635' 00:18:50.777 killing process with pid 1141635 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1141635 00:18:50.777 01:04:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1141635 00:18:51.037 01:04:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.037 01:04:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.037 01:04:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.037 01:04:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.037 01:04:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.037 01:04:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.037 01:04:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.037 01:04:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.944 01:04:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:52.944 00:18:52.944 real 0m9.854s 00:18:52.944 user 0m22.139s 00:18:52.944 sys 0m2.302s 00:18:52.944 01:04:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.944 01:04:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:52.944 ************************************ 00:18:52.944 END TEST nvmf_nmic 00:18:52.944 ************************************ 00:18:52.944 01:04:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:52.944 01:04:42 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:52.944 01:04:42 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:52.944 01:04:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.944 01:04:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.944 ************************************ 00:18:52.944 START TEST nvmf_fio_target 00:18:52.944 ************************************ 00:18:52.944 01:04:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:52.944 * Looking for test storage... 00:18:53.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.203 01:04:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.108 01:04:44 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:55.108 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:55.108 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:55.109 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.109 01:04:44 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:55.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:55.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:18:55.109 00:18:55.109 --- 10.0.0.2 ping statistics --- 00:18:55.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.109 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:18:55.109 00:18:55.109 --- 10.0.0.1 ping statistics --- 00:18:55.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.109 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1144254 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1144254 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1144254 ']' 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
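Editor's note: both tests in this log build the same phy TCP topology before starting the target inside it. Condensed from the nvmftestinit/nvmf_tcp_init trace above (a summary of the traced commands, not an excerpt of nvmf/common.sh):

  # One E810 port (cvl_0_0) is moved into a network namespace and addressed as the
  # target; its sibling port (cvl_0_1) stays in the root namespace as the initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # nvmf_tgt then runs inside the namespace, so its TCP listeners live on 10.0.0.2:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Running the target in its own namespace is what lets one host act as both target (10.0.0.2) and initiator (10.0.0.1) over real NIC ports, which is why the trace pings in both directions before proceeding.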
00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.109 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.367 [2024-07-14 01:04:44.536537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:55.368 [2024-07-14 01:04:44.536627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.368 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.368 [2024-07-14 01:04:44.608123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.368 [2024-07-14 01:04:44.697568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.368 [2024-07-14 01:04:44.697638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.368 [2024-07-14 01:04:44.697652] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.368 [2024-07-14 01:04:44.697663] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.368 [2024-07-14 01:04:44.697673] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.368 [2024-07-14 01:04:44.697818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.368 [2024-07-14 01:04:44.697888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.368 [2024-07-14 01:04:44.697949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.368 [2024-07-14 01:04:44.697952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.625 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.625 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:55.625 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:55.625 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:55.625 01:04:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.625 01:04:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.625 01:04:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:55.883 [2024-07-14 01:04:45.066258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.883 01:04:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:56.141 01:04:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:56.141 01:04:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:56.399 01:04:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:56.399 01:04:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:56.657 01:04:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:18:56.657 01:04:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:56.915 01:04:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:56.915 01:04:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:57.174 01:04:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.432 01:04:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:57.432 01:04:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.690 01:04:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:57.690 01:04:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:57.948 01:04:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:57.948 01:04:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:58.206 01:04:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:58.464 01:04:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:58.465 01:04:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.723 01:04:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:58.723 01:04:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:58.983 01:04:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:58.983 [2024-07-14 01:04:48.390517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.243 01:04:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:59.243 01:04:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:59.501 01:04:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:00.437 01:04:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:00.437 01:04:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:00.437 01:04:49 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.437 01:04:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:00.437 01:04:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:00.437 01:04:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:02.371 01:04:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:02.371 01:04:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:02.371 01:04:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:02.371 01:04:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:02.371 01:04:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.371 01:04:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:02.371 01:04:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:02.371 [global] 00:19:02.371 thread=1 00:19:02.371 invalidate=1 00:19:02.371 rw=write 00:19:02.371 time_based=1 00:19:02.371 runtime=1 00:19:02.371 ioengine=libaio 00:19:02.371 direct=1 00:19:02.371 bs=4096 00:19:02.371 iodepth=1 00:19:02.371 norandommap=0 00:19:02.371 numjobs=1 00:19:02.371 00:19:02.371 verify_dump=1 00:19:02.371 verify_backlog=512 00:19:02.371 verify_state_save=0 00:19:02.372 do_verify=1 00:19:02.372 verify=crc32c-intel 00:19:02.372 [job0] 00:19:02.372 filename=/dev/nvme0n1 00:19:02.372 [job1] 00:19:02.372 filename=/dev/nvme0n2 00:19:02.372 [job2] 00:19:02.372 filename=/dev/nvme0n3 00:19:02.372 [job3] 00:19:02.372 filename=/dev/nvme0n4 00:19:02.372 Could not set queue depth (nvme0n1) 00:19:02.372 Could not set queue depth (nvme0n2) 00:19:02.372 Could not set queue depth (nvme0n3) 00:19:02.372 Could not set queue depth (nvme0n4) 00:19:02.630 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.630 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.630 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.630 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:02.630 fio-3.35 00:19:02.630 Starting 4 threads 00:19:04.005 00:19:04.005 job0: (groupid=0, jobs=1): err= 0: pid=1145291: Sun Jul 14 01:04:53 2024 00:19:04.005 read: IOPS=27, BW=110KiB/s (113kB/s)(112KiB/1017msec) 00:19:04.005 slat (nsec): min=12704, max=35079, avg=20978.96, stdev=7723.90 00:19:04.005 clat (usec): min=497, max=42005, avg=29533.94, stdev=18616.74 00:19:04.005 lat (usec): min=517, max=42024, avg=29554.92, stdev=18613.12 00:19:04.005 clat percentiles (usec): 00:19:04.005 | 1.00th=[ 498], 5.00th=[ 578], 10.00th=[ 603], 20.00th=[ 644], 00:19:04.005 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:04.005 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:04.005 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:04.005 | 99.99th=[42206] 00:19:04.005 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:19:04.005 slat (nsec): min=7729, max=69343, avg=25661.35, stdev=13163.22 
00:19:04.005 clat (usec): min=207, max=1159, avg=337.65, stdev=102.40 00:19:04.005 lat (usec): min=217, max=1170, avg=363.31, stdev=110.87 00:19:04.005 clat percentiles (usec): 00:19:04.005 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 245], 00:19:04.005 | 30.00th=[ 255], 40.00th=[ 273], 50.00th=[ 314], 60.00th=[ 363], 00:19:04.005 | 70.00th=[ 408], 80.00th=[ 429], 90.00th=[ 465], 95.00th=[ 494], 00:19:04.005 | 99.00th=[ 562], 99.50th=[ 693], 99.90th=[ 1156], 99.95th=[ 1156], 00:19:04.005 | 99.99th=[ 1156] 00:19:04.005 bw ( KiB/s): min= 4096, max= 4096, per=36.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.005 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.005 lat (usec) : 250=22.59%, 500=68.52%, 750=4.63%, 1000=0.37% 00:19:04.005 lat (msec) : 2=0.19%, 50=3.70% 00:19:04.005 cpu : usr=0.79%, sys=1.67%, ctx=542, majf=0, minf=1 00:19:04.005 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.005 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.005 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.005 job1: (groupid=0, jobs=1): err= 0: pid=1145292: Sun Jul 14 01:04:53 2024 00:19:04.005 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:04.005 slat (nsec): min=9034, max=58831, avg=16941.31, stdev=8511.57 00:19:04.005 clat (usec): min=432, max=727, avg=502.77, stdev=36.35 00:19:04.005 lat (usec): min=445, max=743, avg=519.72, stdev=42.64 00:19:04.005 clat percentiles (usec): 00:19:04.005 | 1.00th=[ 445], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 478], 00:19:04.005 | 30.00th=[ 482], 40.00th=[ 486], 50.00th=[ 494], 60.00th=[ 502], 00:19:04.005 | 70.00th=[ 510], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 578], 00:19:04.005 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 725], 99.95th=[ 725], 00:19:04.005 | 99.99th=[ 725] 00:19:04.005 write: IOPS=1377, BW=5510KiB/s (5643kB/s)(5516KiB/1001msec); 0 zone resets 00:19:04.005 slat (nsec): min=10429, max=71571, avg=20561.92, stdev=11890.05 00:19:04.005 clat (usec): min=212, max=2087, avg=310.13, stdev=142.64 00:19:04.005 lat (usec): min=223, max=2116, avg=330.69, stdev=150.68 00:19:04.005 clat percentiles (usec): 00:19:04.005 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 227], 00:19:04.005 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:19:04.005 | 70.00th=[ 285], 80.00th=[ 424], 90.00th=[ 523], 95.00th=[ 611], 00:19:04.005 | 99.00th=[ 750], 99.50th=[ 816], 99.90th=[ 1237], 99.95th=[ 2089], 00:19:04.005 | 99.99th=[ 2089] 00:19:04.005 bw ( KiB/s): min= 4096, max= 4096, per=36.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.005 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.005 lat (usec) : 250=37.29%, 500=37.00%, 750=25.18%, 1000=0.46% 00:19:04.005 lat (msec) : 2=0.04%, 4=0.04% 00:19:04.006 cpu : usr=2.40%, sys=4.60%, ctx=2404, majf=0, minf=1 00:19:04.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.006 issued rwts: total=1024,1379,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.006 job2: (groupid=0, jobs=1): err= 0: pid=1145293: Sun Jul 14 
01:04:53 2024 00:19:04.006 read: IOPS=19, BW=77.2KiB/s (79.1kB/s)(80.0KiB/1036msec) 00:19:04.006 slat (nsec): min=13207, max=35651, avg=18510.70, stdev=6281.75 00:19:04.006 clat (usec): min=40926, max=41976, avg=41091.50, stdev=308.16 00:19:04.006 lat (usec): min=40962, max=41992, avg=41110.01, stdev=308.49 00:19:04.006 clat percentiles (usec): 00:19:04.006 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:04.006 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:04.006 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:19:04.006 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:04.006 | 99.99th=[42206] 00:19:04.006 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:19:04.006 slat (nsec): min=7673, max=77202, avg=29007.30, stdev=11579.79 00:19:04.006 clat (usec): min=245, max=1270, avg=381.12, stdev=85.99 00:19:04.006 lat (usec): min=268, max=1309, avg=410.12, stdev=89.57 00:19:04.006 clat percentiles (usec): 00:19:04.006 | 1.00th=[ 260], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 306], 00:19:04.006 | 30.00th=[ 338], 40.00th=[ 363], 50.00th=[ 379], 60.00th=[ 404], 00:19:04.006 | 70.00th=[ 420], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 482], 00:19:04.006 | 99.00th=[ 529], 99.50th=[ 791], 99.90th=[ 1270], 99.95th=[ 1270], 00:19:04.006 | 99.99th=[ 1270] 00:19:04.006 bw ( KiB/s): min= 4096, max= 4096, per=36.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.006 lat (usec) : 250=0.19%, 500=94.17%, 750=1.32%, 1000=0.19% 00:19:04.006 lat (msec) : 2=0.38%, 50=3.76% 00:19:04.006 cpu : usr=1.45%, sys=1.16%, ctx=533, majf=0, minf=2 00:19:04.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.006 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.006 job3: (groupid=0, jobs=1): err= 0: pid=1145294: Sun Jul 14 01:04:53 2024 00:19:04.006 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:19:04.006 slat (nsec): min=13017, max=36913, avg=20012.57, stdev=8223.43 00:19:04.006 clat (usec): min=40697, max=42054, avg=41105.25, stdev=383.64 00:19:04.006 lat (usec): min=40730, max=42067, avg=41125.26, stdev=380.86 00:19:04.006 clat percentiles (usec): 00:19:04.006 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:04.006 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:04.006 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:04.006 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:04.006 | 99.99th=[42206] 00:19:04.006 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:19:04.006 slat (nsec): min=7758, max=51342, avg=20394.95, stdev=8275.07 00:19:04.006 clat (usec): min=209, max=505, avg=267.89, stdev=39.35 00:19:04.006 lat (usec): min=218, max=514, avg=288.29, stdev=41.63 00:19:04.006 clat percentiles (usec): 00:19:04.006 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 237], 20.00th=[ 245], 00:19:04.006 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:19:04.006 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 322], 95.00th=[ 351], 00:19:04.006 | 99.00th=[ 429], 99.50th=[ 441], 
99.90th=[ 506], 99.95th=[ 506], 00:19:04.006 | 99.99th=[ 506] 00:19:04.006 bw ( KiB/s): min= 4096, max= 4096, per=36.39%, avg=4096.00, stdev= 0.00, samples=1 00:19:04.006 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:04.006 lat (usec) : 250=31.14%, 500=64.73%, 750=0.19% 00:19:04.006 lat (msec) : 50=3.94% 00:19:04.006 cpu : usr=1.18%, sys=0.79%, ctx=534, majf=0, minf=1 00:19:04.006 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:04.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.006 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.006 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:04.006 00:19:04.006 Run status group 0 (all jobs): 00:19:04.006 READ: bw=4220KiB/s (4321kB/s), 77.2KiB/s-4092KiB/s (79.1kB/s-4190kB/s), io=4372KiB (4477kB), run=1001-1036msec 00:19:04.006 WRITE: bw=11.0MiB/s (11.5MB/s), 1977KiB/s-5510KiB/s (2024kB/s-5643kB/s), io=11.4MiB (11.9MB), run=1001-1036msec 00:19:04.006 00:19:04.006 Disk stats (read/write): 00:19:04.006 nvme0n1: ios=46/512, merge=0/0, ticks=1485/161, in_queue=1646, util=85.67% 00:19:04.006 nvme0n2: ios=943/1024, merge=0/0, ticks=534/327, in_queue=861, util=91.25% 00:19:04.006 nvme0n3: ios=78/512, merge=0/0, ticks=727/182, in_queue=909, util=95.19% 00:19:04.006 nvme0n4: ios=40/512, merge=0/0, ticks=1603/122, in_queue=1725, util=94.21% 00:19:04.006 01:04:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:04.006 [global] 00:19:04.006 thread=1 00:19:04.006 invalidate=1 00:19:04.006 rw=randwrite 00:19:04.006 time_based=1 00:19:04.006 runtime=1 00:19:04.006 ioengine=libaio 00:19:04.006 direct=1 00:19:04.006 bs=4096 00:19:04.006 iodepth=1 00:19:04.006 norandommap=0 00:19:04.006 numjobs=1 00:19:04.006 00:19:04.006 verify_dump=1 00:19:04.006 verify_backlog=512 00:19:04.006 verify_state_save=0 00:19:04.006 do_verify=1 00:19:04.006 verify=crc32c-intel 00:19:04.006 [job0] 00:19:04.006 filename=/dev/nvme0n1 00:19:04.006 [job1] 00:19:04.006 filename=/dev/nvme0n2 00:19:04.006 [job2] 00:19:04.006 filename=/dev/nvme0n3 00:19:04.006 [job3] 00:19:04.006 filename=/dev/nvme0n4 00:19:04.006 Could not set queue depth (nvme0n1) 00:19:04.006 Could not set queue depth (nvme0n2) 00:19:04.006 Could not set queue depth (nvme0n3) 00:19:04.006 Could not set queue depth (nvme0n4) 00:19:04.006 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.006 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.006 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.006 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.006 fio-3.35 00:19:04.006 Starting 4 threads 00:19:05.381 00:19:05.381 job0: (groupid=0, jobs=1): err= 0: pid=1145518: Sun Jul 14 01:04:54 2024 00:19:05.381 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:05.382 slat (nsec): min=5036, max=64268, avg=20541.51, stdev=9840.22 00:19:05.382 clat (usec): min=315, max=4130, avg=512.20, stdev=148.41 00:19:05.382 lat (usec): min=330, max=4169, avg=532.74, stdev=150.49 00:19:05.382 clat percentiles (usec): 00:19:05.382 | 1.00th=[ 330], 
5.00th=[ 363], 10.00th=[ 392], 20.00th=[ 433], 00:19:05.382 | 30.00th=[ 474], 40.00th=[ 490], 50.00th=[ 498], 60.00th=[ 515], 00:19:05.382 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 652], 95.00th=[ 693], 00:19:05.382 | 99.00th=[ 791], 99.50th=[ 898], 99.90th=[ 1012], 99.95th=[ 4146], 00:19:05.382 | 99.99th=[ 4146] 00:19:05.382 write: IOPS=1231, BW=4927KiB/s (5045kB/s)(4932KiB/1001msec); 0 zone resets 00:19:05.382 slat (nsec): min=6554, max=68459, avg=17887.92, stdev=9503.98 00:19:05.382 clat (usec): min=187, max=658, avg=340.99, stdev=99.36 00:19:05.382 lat (usec): min=196, max=723, avg=358.88, stdev=102.85 00:19:05.382 clat percentiles (usec): 00:19:05.382 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 235], 00:19:05.382 | 30.00th=[ 262], 40.00th=[ 293], 50.00th=[ 338], 60.00th=[ 375], 00:19:05.382 | 70.00th=[ 404], 80.00th=[ 433], 90.00th=[ 474], 95.00th=[ 515], 00:19:05.382 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 619], 99.95th=[ 660], 00:19:05.382 | 99.99th=[ 660] 00:19:05.382 bw ( KiB/s): min= 4272, max= 4272, per=22.86%, avg=4272.00, stdev= 0.00, samples=1 00:19:05.382 iops : min= 1068, max= 1068, avg=1068.00, stdev= 0.00, samples=1 00:19:05.382 lat (usec) : 250=15.15%, 500=58.75%, 750=25.12%, 1000=0.89% 00:19:05.382 lat (msec) : 2=0.04%, 10=0.04% 00:19:05.382 cpu : usr=2.90%, sys=3.80%, ctx=2259, majf=0, minf=1 00:19:05.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.382 issued rwts: total=1024,1233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.382 job1: (groupid=0, jobs=1): err= 0: pid=1145519: Sun Jul 14 01:04:54 2024 00:19:05.382 read: IOPS=1360, BW=5443KiB/s (5573kB/s)(5448KiB/1001msec) 00:19:05.382 slat (nsec): min=5537, max=47937, avg=12569.57, stdev=6268.42 00:19:05.382 clat (usec): min=307, max=721, avg=405.99, stdev=70.22 00:19:05.382 lat (usec): min=313, max=728, avg=418.56, stdev=71.63 00:19:05.382 clat percentiles (usec): 00:19:05.382 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:19:05.382 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 412], 00:19:05.382 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 502], 95.00th=[ 529], 00:19:05.382 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 693], 99.95th=[ 725], 00:19:05.382 | 99.99th=[ 725] 00:19:05.382 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:05.382 slat (nsec): min=6858, max=54060, avg=13106.91, stdev=6458.93 00:19:05.382 clat (usec): min=187, max=1010, avg=259.38, stdev=67.05 00:19:05.382 lat (usec): min=195, max=1020, avg=272.48, stdev=69.06 00:19:05.382 clat percentiles (usec): 00:19:05.382 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:19:05.382 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 249], 00:19:05.382 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 355], 95.00th=[ 400], 00:19:05.382 | 99.00th=[ 486], 99.50th=[ 553], 99.90th=[ 750], 99.95th=[ 1012], 00:19:05.382 | 99.99th=[ 1012] 00:19:05.382 bw ( KiB/s): min= 7184, max= 7184, per=38.44%, avg=7184.00, stdev= 0.00, samples=1 00:19:05.382 iops : min= 1796, max= 1796, avg=1796.00, stdev= 0.00, samples=1 00:19:05.382 lat (usec) : 250=31.92%, 500=62.49%, 750=5.52%, 1000=0.03% 00:19:05.382 lat (msec) : 2=0.03% 00:19:05.382 cpu : usr=3.10%, sys=5.00%, ctx=2899, majf=0, minf=2 00:19:05.382 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.382 issued rwts: total=1362,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.382 job2: (groupid=0, jobs=1): err= 0: pid=1145520: Sun Jul 14 01:04:54 2024 00:19:05.382 read: IOPS=20, BW=82.0KiB/s (83.9kB/s)(84.0KiB/1025msec) 00:19:05.382 slat (nsec): min=10252, max=41702, avg=17014.67, stdev=7401.33 00:19:05.382 clat (usec): min=40883, max=41997, avg=41168.62, stdev=395.58 00:19:05.382 lat (usec): min=40918, max=42012, avg=41185.63, stdev=393.32 00:19:05.382 clat percentiles (usec): 00:19:05.382 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:05.382 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:05.382 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:05.382 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:05.382 | 99.99th=[42206] 00:19:05.382 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:19:05.382 slat (nsec): min=8238, max=55825, avg=14763.65, stdev=6949.33 00:19:05.382 clat (usec): min=218, max=452, avg=293.54, stdev=42.08 00:19:05.382 lat (usec): min=236, max=487, avg=308.30, stdev=44.27 00:19:05.382 clat percentiles (usec): 00:19:05.382 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 247], 20.00th=[ 260], 00:19:05.382 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:19:05.382 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[ 375], 00:19:05.382 | 99.00th=[ 416], 99.50th=[ 449], 99.90th=[ 453], 99.95th=[ 453], 00:19:05.382 | 99.99th=[ 453] 00:19:05.382 bw ( KiB/s): min= 4096, max= 4096, per=21.92%, avg=4096.00, stdev= 0.00, samples=1 00:19:05.382 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:05.382 lat (usec) : 250=11.63%, 500=84.43% 00:19:05.382 lat (msec) : 50=3.94% 00:19:05.382 cpu : usr=0.88%, sys=0.49%, ctx=534, majf=0, minf=1 00:19:05.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.382 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.382 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.382 job3: (groupid=0, jobs=1): err= 0: pid=1145521: Sun Jul 14 01:04:54 2024 00:19:05.382 read: IOPS=994, BW=3977KiB/s (4072kB/s)(4100KiB/1031msec) 00:19:05.382 slat (nsec): min=5132, max=72774, avg=18369.79, stdev=10578.36 00:19:05.382 clat (usec): min=318, max=41348, avg=536.90, stdev=1792.38 00:19:05.382 lat (usec): min=324, max=41363, avg=555.27, stdev=1792.37 00:19:05.382 clat percentiles (usec): 00:19:05.382 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 379], 00:19:05.382 | 30.00th=[ 412], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 474], 00:19:05.382 | 70.00th=[ 494], 80.00th=[ 519], 90.00th=[ 562], 95.00th=[ 594], 00:19:05.382 | 99.00th=[ 668], 99.50th=[ 685], 99.90th=[40633], 99.95th=[41157], 00:19:05.382 | 99.99th=[41157] 00:19:05.382 write: IOPS=1489, BW=5959KiB/s (6102kB/s)(6144KiB/1031msec); 0 zone resets 00:19:05.382 slat (nsec): min=5671, max=53506, avg=13309.63, stdev=5760.71 00:19:05.382 clat (usec): min=194, max=1438, 
avg=279.83, stdev=68.45 00:19:05.382 lat (usec): min=201, max=1451, avg=293.14, stdev=69.78 00:19:05.382 clat percentiles (usec): 00:19:05.382 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:19:05.382 | 30.00th=[ 233], 40.00th=[ 251], 50.00th=[ 265], 60.00th=[ 289], 00:19:05.382 | 70.00th=[ 306], 80.00th=[ 326], 90.00th=[ 363], 95.00th=[ 396], 00:19:05.382 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 562], 99.95th=[ 1434], 00:19:05.382 | 99.99th=[ 1434] 00:19:05.382 bw ( KiB/s): min= 6144, max= 6144, per=32.88%, avg=6144.00, stdev= 0.00, samples=2 00:19:05.382 iops : min= 1536, max= 1536, avg=1536.00, stdev= 0.00, samples=2 00:19:05.382 lat (usec) : 250=23.47%, 500=65.68%, 750=10.74% 00:19:05.382 lat (msec) : 2=0.04%, 50=0.08% 00:19:05.382 cpu : usr=2.62%, sys=4.08%, ctx=2561, majf=0, minf=1 00:19:05.382 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:05.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.383 issued rwts: total=1025,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.383 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:05.383 00:19:05.383 Run status group 0 (all jobs): 00:19:05.383 READ: bw=13.0MiB/s (13.6MB/s), 82.0KiB/s-5443KiB/s (83.9kB/s-5573kB/s), io=13.4MiB (14.1MB), run=1001-1031msec 00:19:05.383 WRITE: bw=18.2MiB/s (19.1MB/s), 1998KiB/s-6138KiB/s (2046kB/s-6285kB/s), io=18.8MiB (19.7MB), run=1001-1031msec 00:19:05.383 00:19:05.383 Disk stats (read/write): 00:19:05.383 nvme0n1: ios=902/1024, merge=0/0, ticks=534/344, in_queue=878, util=89.98% 00:19:05.383 nvme0n2: ios=1074/1494, merge=0/0, ticks=482/362, in_queue=844, util=91.68% 00:19:05.383 nvme0n3: ios=65/512, merge=0/0, ticks=1637/140, in_queue=1777, util=96.87% 00:19:05.383 nvme0n4: ios=1081/1088, merge=0/0, ticks=567/293, in_queue=860, util=95.80% 00:19:05.383 01:04:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:05.383 [global] 00:19:05.383 thread=1 00:19:05.383 invalidate=1 00:19:05.383 rw=write 00:19:05.383 time_based=1 00:19:05.383 runtime=1 00:19:05.383 ioengine=libaio 00:19:05.383 direct=1 00:19:05.383 bs=4096 00:19:05.383 iodepth=128 00:19:05.383 norandommap=0 00:19:05.383 numjobs=1 00:19:05.383 00:19:05.383 verify_dump=1 00:19:05.383 verify_backlog=512 00:19:05.383 verify_state_save=0 00:19:05.383 do_verify=1 00:19:05.383 verify=crc32c-intel 00:19:05.383 [job0] 00:19:05.383 filename=/dev/nvme0n1 00:19:05.383 [job1] 00:19:05.383 filename=/dev/nvme0n2 00:19:05.383 [job2] 00:19:05.383 filename=/dev/nvme0n3 00:19:05.383 [job3] 00:19:05.383 filename=/dev/nvme0n4 00:19:05.383 Could not set queue depth (nvme0n1) 00:19:05.383 Could not set queue depth (nvme0n2) 00:19:05.383 Could not set queue depth (nvme0n3) 00:19:05.383 Could not set queue depth (nvme0n4) 00:19:05.383 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.383 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.383 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.383 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:05.383 fio-3.35 00:19:05.383 Starting 4 threads 00:19:06.759 00:19:06.759 job0: 
(groupid=0, jobs=1): err= 0: pid=1145871: Sun Jul 14 01:04:55 2024 00:19:06.759 read: IOPS=5147, BW=20.1MiB/s (21.1MB/s)(20.1MiB/1002msec) 00:19:06.759 slat (usec): min=2, max=25356, avg=93.57, stdev=682.24 00:19:06.759 clat (usec): min=1234, max=55603, avg=12397.36, stdev=6187.41 00:19:06.759 lat (usec): min=1247, max=55644, avg=12490.93, stdev=6220.74 00:19:06.759 clat percentiles (usec): 00:19:06.759 | 1.00th=[ 6390], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10290], 00:19:06.759 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11469], 00:19:06.759 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14484], 95.00th=[17695], 00:19:06.759 | 99.00th=[52167], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:19:06.759 | 99.99th=[55837] 00:19:06.759 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:19:06.759 slat (usec): min=3, max=33475, avg=76.02, stdev=576.86 00:19:06.759 clat (usec): min=697, max=55564, avg=10455.17, stdev=4154.05 00:19:06.759 lat (usec): min=724, max=55570, avg=10531.19, stdev=4171.08 00:19:06.759 clat percentiles (usec): 00:19:06.759 | 1.00th=[ 2900], 5.00th=[ 5145], 10.00th=[ 6456], 20.00th=[ 7570], 00:19:06.759 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10683], 60.00th=[10945], 00:19:06.759 | 70.00th=[11338], 80.00th=[11731], 90.00th=[13435], 95.00th=[14746], 00:19:06.759 | 99.00th=[30278], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:19:06.759 | 99.99th=[55313] 00:19:06.759 bw ( KiB/s): min=20480, max=23864, per=32.97%, avg=22172.00, stdev=2392.85, samples=2 00:19:06.759 iops : min= 5120, max= 5966, avg=5543.00, stdev=598.21, samples=2 00:19:06.759 lat (usec) : 750=0.02% 00:19:06.759 lat (msec) : 2=0.10%, 4=1.10%, 10=24.81%, 20=71.85%, 50=1.24% 00:19:06.759 lat (msec) : 100=0.87% 00:19:06.759 cpu : usr=5.79%, sys=10.49%, ctx=517, majf=0, minf=1 00:19:06.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:06.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.759 issued rwts: total=5158,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.759 job1: (groupid=0, jobs=1): err= 0: pid=1145872: Sun Jul 14 01:04:55 2024 00:19:06.759 read: IOPS=5980, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1005msec) 00:19:06.759 slat (usec): min=2, max=9490, avg=82.56, stdev=543.23 00:19:06.759 clat (usec): min=1236, max=25584, avg=11459.61, stdev=2579.65 00:19:06.759 lat (usec): min=1258, max=25593, avg=11542.17, stdev=2604.52 00:19:06.759 clat percentiles (usec): 00:19:06.759 | 1.00th=[ 7111], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[ 9765], 00:19:06.759 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:19:06.759 | 70.00th=[11863], 80.00th=[12911], 90.00th=[14746], 95.00th=[17171], 00:19:06.759 | 99.00th=[20055], 99.50th=[20579], 99.90th=[24511], 99.95th=[24511], 00:19:06.759 | 99.99th=[25560] 00:19:06.759 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:19:06.759 slat (usec): min=4, max=8424, avg=69.63, stdev=419.66 00:19:06.759 clat (usec): min=1615, max=25583, avg=9534.15, stdev=2880.98 00:19:06.759 lat (usec): min=1628, max=25593, avg=9603.78, stdev=2881.84 00:19:06.759 clat percentiles (usec): 00:19:06.759 | 1.00th=[ 3621], 5.00th=[ 5211], 10.00th=[ 6128], 20.00th=[ 7242], 00:19:06.759 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9896], 60.00th=[10421], 00:19:06.759 | 
70.00th=[10945], 80.00th=[11338], 90.00th=[12387], 95.00th=[13304], 00:19:06.759 | 99.00th=[20841], 99.50th=[21890], 99.90th=[22676], 99.95th=[25560], 00:19:06.759 | 99.99th=[25560] 00:19:06.759 bw ( KiB/s): min=24576, max=24576, per=36.55%, avg=24576.00, stdev= 0.00, samples=2 00:19:06.759 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:19:06.759 lat (msec) : 2=0.04%, 4=0.91%, 10=37.26%, 20=60.69%, 50=1.09% 00:19:06.759 cpu : usr=8.07%, sys=12.95%, ctx=484, majf=0, minf=1 00:19:06.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:06.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.759 issued rwts: total=6010,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.759 job2: (groupid=0, jobs=1): err= 0: pid=1145873: Sun Jul 14 01:04:55 2024 00:19:06.759 read: IOPS=3003, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1004msec) 00:19:06.759 slat (usec): min=3, max=14900, avg=166.15, stdev=1020.15 00:19:06.759 clat (usec): min=2942, max=51031, avg=20794.92, stdev=5350.55 00:19:06.759 lat (usec): min=2957, max=51038, avg=20961.07, stdev=5435.32 00:19:06.759 clat percentiles (usec): 00:19:06.759 | 1.00th=[10028], 5.00th=[14353], 10.00th=[16057], 20.00th=[16909], 00:19:06.759 | 30.00th=[17433], 40.00th=[19006], 50.00th=[19792], 60.00th=[20317], 00:19:06.759 | 70.00th=[22152], 80.00th=[24773], 90.00th=[27919], 95.00th=[29754], 00:19:06.759 | 99.00th=[38536], 99.50th=[39060], 99.90th=[49021], 99.95th=[51119], 00:19:06.759 | 99.99th=[51119] 00:19:06.759 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:19:06.759 slat (usec): min=4, max=17358, avg=151.81, stdev=987.20 00:19:06.759 clat (usec): min=9055, max=59014, avg=20847.64, stdev=8336.83 00:19:06.759 lat (usec): min=9075, max=59022, avg=20999.46, stdev=8415.25 00:19:06.759 clat percentiles (usec): 00:19:06.759 | 1.00th=[10683], 5.00th=[14222], 10.00th=[14746], 20.00th=[15270], 00:19:06.759 | 30.00th=[16057], 40.00th=[16450], 50.00th=[17171], 60.00th=[19268], 00:19:06.759 | 70.00th=[20841], 80.00th=[25560], 90.00th=[29754], 95.00th=[35914], 00:19:06.759 | 99.00th=[55313], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:19:06.759 | 99.99th=[58983] 00:19:06.759 bw ( KiB/s): min=12112, max=12464, per=18.27%, avg=12288.00, stdev=248.90, samples=2 00:19:06.759 iops : min= 3028, max= 3116, avg=3072.00, stdev=62.23, samples=2 00:19:06.759 lat (msec) : 4=0.08%, 10=0.53%, 20=60.17%, 50=37.89%, 100=1.33% 00:19:06.759 cpu : usr=4.99%, sys=5.68%, ctx=198, majf=0, minf=1 00:19:06.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:06.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.759 issued rwts: total=3016,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.759 job3: (groupid=0, jobs=1): err= 0: pid=1145874: Sun Jul 14 01:04:55 2024 00:19:06.759 read: IOPS=1948, BW=7793KiB/s (7980kB/s)(7824KiB/1004msec) 00:19:06.759 slat (usec): min=3, max=16760, avg=264.50, stdev=1488.68 00:19:06.759 clat (usec): min=970, max=57829, avg=32868.93, stdev=8718.65 00:19:06.759 lat (usec): min=9356, max=57848, avg=33133.43, stdev=8806.13 00:19:06.759 clat percentiles (usec): 00:19:06.759 | 1.00th=[ 
9634], 5.00th=[17695], 10.00th=[19792], 20.00th=[24249], 00:19:06.759 | 30.00th=[29754], 40.00th=[32113], 50.00th=[34866], 60.00th=[36439], 00:19:06.759 | 70.00th=[37487], 80.00th=[40109], 90.00th=[42730], 95.00th=[45351], 00:19:06.759 | 99.00th=[49546], 99.50th=[49546], 99.90th=[57410], 99.95th=[57934], 00:19:06.759 | 99.99th=[57934] 00:19:06.759 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:19:06.759 slat (usec): min=4, max=17476, avg=225.47, stdev=1210.72 00:19:06.759 clat (usec): min=10563, max=65750, avg=30474.18, stdev=13144.05 00:19:06.759 lat (usec): min=10901, max=65792, avg=30699.65, stdev=13259.35 00:19:06.759 clat percentiles (usec): 00:19:06.759 | 1.00th=[11338], 5.00th=[12387], 10.00th=[13042], 20.00th=[19268], 00:19:06.759 | 30.00th=[21103], 40.00th=[25297], 50.00th=[28443], 60.00th=[32113], 00:19:06.759 | 70.00th=[35914], 80.00th=[43254], 90.00th=[46924], 95.00th=[55313], 00:19:06.759 | 99.00th=[61080], 99.50th=[61604], 99.90th=[65799], 99.95th=[65799], 00:19:06.759 | 99.99th=[65799] 00:19:06.759 bw ( KiB/s): min= 7392, max= 8992, per=12.18%, avg=8192.00, stdev=1131.37, samples=2 00:19:06.759 iops : min= 1848, max= 2248, avg=2048.00, stdev=282.84, samples=2 00:19:06.759 lat (usec) : 1000=0.02% 00:19:06.759 lat (msec) : 10=0.85%, 20=14.91%, 50=80.42%, 100=3.80% 00:19:06.759 cpu : usr=2.39%, sys=5.58%, ctx=217, majf=0, minf=1 00:19:06.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:06.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.759 issued rwts: total=1956,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.759 00:19:06.759 Run status group 0 (all jobs): 00:19:06.759 READ: bw=62.7MiB/s (65.8MB/s), 7793KiB/s-23.4MiB/s (7980kB/s-24.5MB/s), io=63.0MiB (66.1MB), run=1002-1005msec 00:19:06.759 WRITE: bw=65.7MiB/s (68.9MB/s), 8159KiB/s-23.9MiB/s (8355kB/s-25.0MB/s), io=66.0MiB (69.2MB), run=1002-1005msec 00:19:06.759 00:19:06.759 Disk stats (read/write): 00:19:06.759 nvme0n1: ios=4369/4608, merge=0/0, ticks=42809/34938, in_queue=77747, util=98.10% 00:19:06.759 nvme0n2: ios=5131/5127, merge=0/0, ticks=53367/46542, in_queue=99909, util=97.76% 00:19:06.759 nvme0n3: ios=2517/2560, merge=0/0, ticks=27640/24999, in_queue=52639, util=97.70% 00:19:06.759 nvme0n4: ios=1555/1911, merge=0/0, ticks=17431/17785, in_queue=35216, util=98.32% 00:19:06.759 01:04:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:06.759 [global] 00:19:06.759 thread=1 00:19:06.759 invalidate=1 00:19:06.759 rw=randwrite 00:19:06.759 time_based=1 00:19:06.759 runtime=1 00:19:06.759 ioengine=libaio 00:19:06.759 direct=1 00:19:06.759 bs=4096 00:19:06.759 iodepth=128 00:19:06.759 norandommap=0 00:19:06.759 numjobs=1 00:19:06.759 00:19:06.759 verify_dump=1 00:19:06.759 verify_backlog=512 00:19:06.759 verify_state_save=0 00:19:06.759 do_verify=1 00:19:06.759 verify=crc32c-intel 00:19:06.759 [job0] 00:19:06.759 filename=/dev/nvme0n1 00:19:06.759 [job1] 00:19:06.759 filename=/dev/nvme0n2 00:19:06.759 [job2] 00:19:06.759 filename=/dev/nvme0n3 00:19:06.759 [job3] 00:19:06.759 filename=/dev/nvme0n4 00:19:06.759 Could not set queue depth (nvme0n1) 00:19:06.759 Could not set queue depth (nvme0n2) 00:19:06.759 Could not set queue depth (nvme0n3) 
00:19:06.759 Could not set queue depth (nvme0n4) 00:19:06.759 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.759 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.759 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.760 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:06.760 fio-3.35 00:19:06.760 Starting 4 threads 00:19:08.139 00:19:08.139 job0: (groupid=0, jobs=1): err= 0: pid=1146098: Sun Jul 14 01:04:57 2024 00:19:08.139 read: IOPS=3176, BW=12.4MiB/s (13.0MB/s)(13.1MiB/1053msec) 00:19:08.139 slat (usec): min=2, max=25510, avg=133.11, stdev=1028.06 00:19:08.139 clat (usec): min=598, max=65245, avg=18581.15, stdev=11793.09 00:19:08.139 lat (usec): min=631, max=65253, avg=18714.26, stdev=11836.45 00:19:08.139 clat percentiles (usec): 00:19:08.139 | 1.00th=[ 1565], 5.00th=[ 4555], 10.00th=[ 9241], 20.00th=[10814], 00:19:08.139 | 30.00th=[11994], 40.00th=[14091], 50.00th=[15139], 60.00th=[17433], 00:19:08.139 | 70.00th=[19006], 80.00th=[23987], 90.00th=[32637], 95.00th=[46924], 00:19:08.139 | 99.00th=[64750], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:19:08.139 | 99.99th=[65274] 00:19:08.139 write: IOPS=3403, BW=13.3MiB/s (13.9MB/s)(14.0MiB/1053msec); 0 zone resets 00:19:08.139 slat (usec): min=3, max=21542, avg=138.23, stdev=971.88 00:19:08.139 clat (usec): min=2781, max=50367, avg=19946.28, stdev=9040.04 00:19:08.139 lat (usec): min=2816, max=50372, avg=20084.51, stdev=9096.76 00:19:08.139 clat percentiles (usec): 00:19:08.139 | 1.00th=[ 4113], 5.00th=[ 6718], 10.00th=[ 8717], 20.00th=[10814], 00:19:08.139 | 30.00th=[13304], 40.00th=[16909], 50.00th=[20841], 60.00th=[22676], 00:19:08.139 | 70.00th=[23987], 80.00th=[26870], 90.00th=[29754], 95.00th=[39584], 00:19:08.139 | 99.00th=[43254], 99.50th=[45876], 99.90th=[49546], 99.95th=[50070], 00:19:08.139 | 99.99th=[50594] 00:19:08.139 bw ( KiB/s): min=12288, max=16384, per=24.29%, avg=14336.00, stdev=2896.31, samples=2 00:19:08.139 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:08.139 lat (usec) : 750=0.03%, 1000=0.01% 00:19:08.139 lat (msec) : 2=0.49%, 4=1.73%, 10=13.51%, 20=45.04%, 50=37.26% 00:19:08.139 lat (msec) : 100=1.92% 00:19:08.139 cpu : usr=2.57%, sys=5.04%, ctx=312, majf=0, minf=1 00:19:08.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:08.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.139 issued rwts: total=3345,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.139 job1: (groupid=0, jobs=1): err= 0: pid=1146099: Sun Jul 14 01:04:57 2024 00:19:08.139 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:19:08.139 slat (usec): min=2, max=18674, avg=108.62, stdev=844.78 00:19:08.139 clat (usec): min=3133, max=46530, avg=15092.24, stdev=6793.88 00:19:08.139 lat (usec): min=3139, max=46956, avg=15200.87, stdev=6828.34 00:19:08.139 clat percentiles (usec): 00:19:08.139 | 1.00th=[ 5211], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9896], 00:19:08.139 | 30.00th=[10683], 40.00th=[12125], 50.00th=[13566], 60.00th=[15008], 00:19:08.139 | 70.00th=[16450], 80.00th=[19792], 90.00th=[22676], 
95.00th=[30016], 00:19:08.139 | 99.00th=[39584], 99.50th=[39584], 99.90th=[42730], 99.95th=[42730], 00:19:08.139 | 99.99th=[46400] 00:19:08.139 write: IOPS=4749, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1008msec); 0 zone resets 00:19:08.139 slat (usec): min=3, max=18723, avg=99.83, stdev=814.10 00:19:08.139 clat (usec): min=1639, max=42227, avg=13801.86, stdev=6714.53 00:19:08.139 lat (usec): min=1645, max=42232, avg=13901.68, stdev=6764.08 00:19:08.139 clat percentiles (usec): 00:19:08.139 | 1.00th=[ 5145], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 8291], 00:19:08.139 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11863], 60.00th=[13173], 00:19:08.139 | 70.00th=[15270], 80.00th=[18744], 90.00th=[24511], 95.00th=[27657], 00:19:08.139 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41157], 99.95th=[42206], 00:19:08.139 | 99.99th=[42206] 00:19:08.139 bw ( KiB/s): min=16432, max=20848, per=31.58%, avg=18640.00, stdev=3122.58, samples=2 00:19:08.139 iops : min= 4108, max= 5212, avg=4660.00, stdev=780.65, samples=2 00:19:08.139 lat (msec) : 2=0.18%, 4=0.07%, 10=26.48%, 20=56.49%, 50=16.78% 00:19:08.139 cpu : usr=3.57%, sys=6.36%, ctx=420, majf=0, minf=1 00:19:08.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:08.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.139 issued rwts: total=4096,4787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.139 job2: (groupid=0, jobs=1): err= 0: pid=1146100: Sun Jul 14 01:04:57 2024 00:19:08.139 read: IOPS=3494, BW=13.7MiB/s (14.3MB/s)(13.8MiB/1011msec) 00:19:08.139 slat (usec): min=2, max=43734, avg=143.06, stdev=1292.33 00:19:08.139 clat (usec): min=4372, max=66007, avg=19022.98, stdev=11938.64 00:19:08.139 lat (usec): min=6026, max=66023, avg=19166.04, stdev=11987.73 00:19:08.139 clat percentiles (usec): 00:19:08.139 | 1.00th=[ 7177], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[11600], 00:19:08.139 | 30.00th=[12387], 40.00th=[13042], 50.00th=[14615], 60.00th=[16057], 00:19:08.139 | 70.00th=[19268], 80.00th=[25035], 90.00th=[32113], 95.00th=[52167], 00:19:08.139 | 99.00th=[61080], 99.50th=[61080], 99.90th=[65799], 99.95th=[65799], 00:19:08.139 | 99.99th=[65799] 00:19:08.139 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:19:08.139 slat (usec): min=3, max=23950, avg=128.99, stdev=996.42 00:19:08.139 clat (usec): min=1260, max=61826, avg=17004.62, stdev=8439.86 00:19:08.139 lat (usec): min=2309, max=61846, avg=17133.61, stdev=8512.31 00:19:08.139 clat percentiles (usec): 00:19:08.139 | 1.00th=[ 6587], 5.00th=[ 6849], 10.00th=[ 8586], 20.00th=[11469], 00:19:08.139 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13566], 60.00th=[16909], 00:19:08.139 | 70.00th=[19268], 80.00th=[22414], 90.00th=[29492], 95.00th=[38011], 00:19:08.139 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[50070], 00:19:08.139 | 99.99th=[61604] 00:19:08.139 bw ( KiB/s): min=13080, max=15592, per=24.29%, avg=14336.00, stdev=1776.25, samples=2 00:19:08.139 iops : min= 3270, max= 3898, avg=3584.00, stdev=444.06, samples=2 00:19:08.139 lat (msec) : 2=0.01%, 4=0.13%, 10=10.50%, 20=62.69%, 50=23.66% 00:19:08.139 lat (msec) : 100=3.01% 00:19:08.139 cpu : usr=2.87%, sys=4.55%, ctx=288, majf=0, minf=1 00:19:08.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:08.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:08.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.139 issued rwts: total=3533,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.139 job3: (groupid=0, jobs=1): err= 0: pid=1146101: Sun Jul 14 01:04:57 2024 00:19:08.139 read: IOPS=3082, BW=12.0MiB/s (12.6MB/s)(12.2MiB/1012msec) 00:19:08.139 slat (usec): min=3, max=26516, avg=150.36, stdev=1027.76 00:19:08.139 clat (usec): min=8022, max=71695, avg=17995.32, stdev=11431.70 00:19:08.139 lat (usec): min=8044, max=71733, avg=18145.68, stdev=11540.71 00:19:08.139 clat percentiles (usec): 00:19:08.139 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 00:19:08.139 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13698], 60.00th=[14877], 00:19:08.139 | 70.00th=[16909], 80.00th=[20317], 90.00th=[34341], 95.00th=[49021], 00:19:08.139 | 99.00th=[60031], 99.50th=[64226], 99.90th=[64226], 99.95th=[68682], 00:19:08.139 | 99.99th=[71828] 00:19:08.139 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:19:08.139 slat (usec): min=4, max=30353, avg=136.89, stdev=760.02 00:19:08.139 clat (usec): min=3230, max=61125, avg=20090.81, stdev=12983.91 00:19:08.139 lat (usec): min=3251, max=61166, avg=20227.70, stdev=13051.19 00:19:08.139 clat percentiles (usec): 00:19:08.139 | 1.00th=[ 6194], 5.00th=[ 8717], 10.00th=[10814], 20.00th=[12125], 00:19:08.139 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[15533], 00:19:08.139 | 70.00th=[21627], 80.00th=[26346], 90.00th=[42730], 95.00th=[51643], 00:19:08.139 | 99.00th=[58459], 99.50th=[58983], 99.90th=[61080], 99.95th=[61080], 00:19:08.139 | 99.99th=[61080] 00:19:08.139 bw ( KiB/s): min= 8064, max=19960, per=23.74%, avg=14012.00, stdev=8411.74, samples=2 00:19:08.139 iops : min= 2016, max= 4990, avg=3503.00, stdev=2102.94, samples=2 00:19:08.139 lat (msec) : 4=0.12%, 10=4.98%, 20=67.10%, 50=22.26%, 100=5.53% 00:19:08.139 cpu : usr=3.96%, sys=8.70%, ctx=433, majf=0, minf=1 00:19:08.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:08.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:08.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:08.139 issued rwts: total=3119,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:08.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:08.139 00:19:08.139 Run status group 0 (all jobs): 00:19:08.139 READ: bw=52.3MiB/s (54.8MB/s), 12.0MiB/s-15.9MiB/s (12.6MB/s-16.6MB/s), io=55.1MiB (57.7MB), run=1008-1053msec 00:19:08.139 WRITE: bw=57.6MiB/s (60.4MB/s), 13.3MiB/s-18.6MiB/s (13.9MB/s-19.5MB/s), io=60.7MiB (63.6MB), run=1008-1053msec 00:19:08.139 00:19:08.139 Disk stats (read/write): 00:19:08.139 nvme0n1: ios=2610/2895, merge=0/0, ticks=40856/55719, in_queue=96575, util=87.27% 00:19:08.139 nvme0n2: ios=3633/4103, merge=0/0, ticks=38663/45927, in_queue=84590, util=89.74% 00:19:08.139 nvme0n3: ios=2992/3072, merge=0/0, ticks=28669/27905, in_queue=56574, util=92.38% 00:19:08.139 nvme0n4: ios=3122/3135, merge=0/0, ticks=27919/25819, in_queue=53738, util=96.11% 00:19:08.139 01:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:08.139 01:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1146241 00:19:08.139 01:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 
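For reference, the flow exercised by this trace condenses to the shell sketch below: malloc and raid/concat bdevs are created over rpc.py, exported through subsystem nqn.2016-06.io.spdk:cnode1 on a TCP listener at 10.0.0.2:4420, the host connects with nvme-cli and fio runs against /dev/nvme0n1-4; the hotplug step then deletes the backing bdevs while the background read job launched just above is still running, so the Remote I/O errors reported further down are the intended result. The rpc.py subcommands, names and addresses are copied from this run, but the paths, loop structure and error handling are simplified assumptions rather than the verbatim target/fio.sh.

# hedged sketch of the target/fio.sh flow seen in this log (not the literal script)
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # workspace path as in this run
rpc=$spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# backing devices: plain malloc bdevs plus a raid0 and a concat volume
$rpc bdev_malloc_create 64 512                                # repeated once per MallocN
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# export everything over NVMe/TCP
$rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns $nqn $ns
done
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420               # hostnqn/hostid flags omitted here

# hotplug: start a long read job, then delete the bdevs underneath it
$spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete $m                                # namespaces disappear mid-I/O
done
wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'

nvme disconnect -n $nqn
$rpc nvmf_delete_subsystem $nqn

Deleting the bdevs out from under an active connection is what makes this a hotplug test; the expected outcome is a non-zero fio exit status, which the script turns into the 'fio failed as expected' message that appears later in this log.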
00:19:08.139 01:04:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:08.139 [global] 00:19:08.139 thread=1 00:19:08.139 invalidate=1 00:19:08.139 rw=read 00:19:08.139 time_based=1 00:19:08.139 runtime=10 00:19:08.139 ioengine=libaio 00:19:08.139 direct=1 00:19:08.139 bs=4096 00:19:08.139 iodepth=1 00:19:08.139 norandommap=1 00:19:08.139 numjobs=1 00:19:08.139 00:19:08.139 [job0] 00:19:08.139 filename=/dev/nvme0n1 00:19:08.139 [job1] 00:19:08.139 filename=/dev/nvme0n2 00:19:08.139 [job2] 00:19:08.139 filename=/dev/nvme0n3 00:19:08.139 [job3] 00:19:08.139 filename=/dev/nvme0n4 00:19:08.140 Could not set queue depth (nvme0n1) 00:19:08.140 Could not set queue depth (nvme0n2) 00:19:08.140 Could not set queue depth (nvme0n3) 00:19:08.140 Could not set queue depth (nvme0n4) 00:19:08.398 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.398 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.398 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.398 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:08.398 fio-3.35 00:19:08.398 Starting 4 threads 00:19:11.683 01:05:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:11.683 01:05:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:11.683 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=25477120, buflen=4096 00:19:11.683 fio: pid=1146341, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:11.683 01:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:11.683 01:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:11.683 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=14176256, buflen=4096 00:19:11.683 fio: pid=1146340, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:11.940 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=10874880, buflen=4096 00:19:11.940 fio: pid=1146338, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:11.940 01:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:11.940 01:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:12.200 01:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.200 01:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:12.200 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3981312, buflen=4096 00:19:12.200 fio: pid=1146339, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:12.200 00:19:12.201 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1146338: Sun Jul 14 01:05:01 2024 00:19:12.201 read: IOPS=769, 
BW=3077KiB/s (3151kB/s)(10.4MiB/3451msec) 00:19:12.201 slat (usec): min=4, max=14369, avg=29.02, stdev=427.50 00:19:12.201 clat (usec): min=348, max=42746, avg=1259.03, stdev=5717.33 00:19:12.201 lat (usec): min=354, max=42758, avg=1288.05, stdev=5731.53 00:19:12.201 clat percentiles (usec): 00:19:12.201 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 400], 20.00th=[ 408], 00:19:12.201 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 445], 60.00th=[ 457], 00:19:12.201 | 70.00th=[ 469], 80.00th=[ 482], 90.00th=[ 502], 95.00th=[ 529], 00:19:12.201 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:19:12.201 | 99.99th=[42730] 00:19:12.201 bw ( KiB/s): min= 96, max= 9048, per=17.12%, avg=2441.33, stdev=3813.13, samples=6 00:19:12.201 iops : min= 24, max= 2262, avg=610.33, stdev=953.28, samples=6 00:19:12.201 lat (usec) : 500=89.08%, 750=8.81%, 1000=0.04% 00:19:12.201 lat (msec) : 2=0.04%, 20=0.04%, 50=1.96% 00:19:12.201 cpu : usr=0.58%, sys=1.25%, ctx=2661, majf=0, minf=1 00:19:12.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.201 issued rwts: total=2656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.201 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1146339: Sun Jul 14 01:05:01 2024 00:19:12.201 read: IOPS=260, BW=1042KiB/s (1067kB/s)(3888KiB/3733msec) 00:19:12.201 slat (usec): min=7, max=20688, avg=47.16, stdev=853.97 00:19:12.201 clat (usec): min=409, max=64165, avg=3767.62, stdev=11178.96 00:19:12.201 lat (usec): min=417, max=64195, avg=3814.79, stdev=11283.14 00:19:12.201 clat percentiles (usec): 00:19:12.201 | 1.00th=[ 416], 5.00th=[ 449], 10.00th=[ 474], 20.00th=[ 482], 00:19:12.201 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 494], 60.00th=[ 506], 00:19:12.201 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 594], 95.00th=[41681], 00:19:12.201 | 99.00th=[42206], 99.50th=[42206], 99.90th=[64226], 99.95th=[64226], 00:19:12.201 | 99.99th=[64226] 00:19:12.201 bw ( KiB/s): min= 93, max= 5168, per=7.74%, avg=1104.71, stdev=1938.17, samples=7 00:19:12.201 iops : min= 23, max= 1292, avg=276.14, stdev=484.56, samples=7 00:19:12.201 lat (usec) : 500=57.66%, 750=34.33% 00:19:12.201 lat (msec) : 10=0.10%, 50=7.71%, 100=0.10% 00:19:12.201 cpu : usr=0.13%, sys=0.32%, ctx=977, majf=0, minf=1 00:19:12.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.201 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.201 issued rwts: total=973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.201 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1146340: Sun Jul 14 01:05:01 2024 00:19:12.201 read: IOPS=1073, BW=4293KiB/s (4396kB/s)(13.5MiB/3225msec) 00:19:12.201 slat (nsec): min=5544, max=47993, avg=11279.78, stdev=5489.49 00:19:12.201 clat (usec): min=314, max=45411, avg=911.11, stdev=4476.42 00:19:12.201 lat (usec): min=322, max=45417, avg=922.38, stdev=4477.24 00:19:12.201 clat percentiles (usec): 00:19:12.201 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:19:12.201 | 30.00th=[ 351], 40.00th=[ 
359], 50.00th=[ 371], 60.00th=[ 412], 00:19:12.201 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[ 553], 95.00th=[ 570], 00:19:12.201 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:12.201 | 99.99th=[45351] 00:19:12.201 bw ( KiB/s): min= 928, max=10312, per=32.31%, avg=4608.00, stdev=3913.41, samples=6 00:19:12.201 iops : min= 232, max= 2578, avg=1152.00, stdev=978.35, samples=6 00:19:12.201 lat (usec) : 500=85.36%, 750=13.32%, 1000=0.03% 00:19:12.201 lat (msec) : 2=0.03%, 20=0.03%, 50=1.21% 00:19:12.201 cpu : usr=0.87%, sys=1.77%, ctx=3464, majf=0, minf=1 00:19:12.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.201 issued rwts: total=3462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.201 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1146341: Sun Jul 14 01:05:01 2024 00:19:12.201 read: IOPS=2123, BW=8494KiB/s (8698kB/s)(24.3MiB/2929msec) 00:19:12.201 slat (nsec): min=5551, max=59097, avg=12185.86, stdev=6871.01 00:19:12.201 clat (usec): min=315, max=41003, avg=451.77, stdev=890.87 00:19:12.201 lat (usec): min=322, max=41018, avg=463.96, stdev=891.11 00:19:12.201 clat percentiles (usec): 00:19:12.201 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:19:12.201 | 30.00th=[ 379], 40.00th=[ 400], 50.00th=[ 437], 60.00th=[ 453], 00:19:12.201 | 70.00th=[ 474], 80.00th=[ 490], 90.00th=[ 523], 95.00th=[ 562], 00:19:12.201 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 938], 99.95th=[ 2442], 00:19:12.201 | 99.99th=[41157] 00:19:12.201 bw ( KiB/s): min= 5816, max= 9896, per=58.72%, avg=8374.40, stdev=1564.88, samples=5 00:19:12.201 iops : min= 1454, max= 2474, avg=2093.60, stdev=391.22, samples=5 00:19:12.201 lat (usec) : 500=86.29%, 750=13.53%, 1000=0.06% 00:19:12.201 lat (msec) : 2=0.03%, 4=0.02%, 50=0.05% 00:19:12.201 cpu : usr=1.98%, sys=3.79%, ctx=6221, majf=0, minf=1 00:19:12.201 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:12.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.201 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.201 issued rwts: total=6221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.201 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:12.201 00:19:12.201 Run status group 0 (all jobs): 00:19:12.201 READ: bw=13.9MiB/s (14.6MB/s), 1042KiB/s-8494KiB/s (1067kB/s-8698kB/s), io=52.0MiB (54.5MB), run=2929-3733msec 00:19:12.201 00:19:12.201 Disk stats (read/write): 00:19:12.201 nvme0n1: ios=2479/0, merge=0/0, ticks=3225/0, in_queue=3225, util=94.74% 00:19:12.201 nvme0n2: ios=1009/0, merge=0/0, ticks=4536/0, in_queue=4536, util=98.23% 00:19:12.201 nvme0n3: ios=3509/0, merge=0/0, ticks=4094/0, in_queue=4094, util=99.10% 00:19:12.201 nvme0n4: ios=6112/0, merge=0/0, ticks=2563/0, in_queue=2563, util=96.71% 00:19:12.459 01:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.459 01:05:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:12.716 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.716 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:12.974 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:12.974 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:13.231 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:13.231 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:13.490 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:13.490 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1146241 00:19:13.490 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:13.490 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:13.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:13.748 nvmf hotplug test: fio failed as expected 00:19:13.748 01:05:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:14.012 rmmod nvme_tcp 
00:19:14.012 rmmod nvme_fabrics 00:19:14.012 rmmod nvme_keyring 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1144254 ']' 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1144254 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1144254 ']' 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1144254 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1144254 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1144254' 00:19:14.012 killing process with pid 1144254 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1144254 00:19:14.012 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1144254 00:19:14.278 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:14.278 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:14.278 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:14.278 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:14.278 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:14.278 01:05:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.278 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.278 01:05:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.184 01:05:05 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:16.184 00:19:16.184 real 0m23.280s 00:19:16.184 user 1m20.523s 00:19:16.184 sys 0m6.776s 00:19:16.184 01:05:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:16.184 01:05:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.184 ************************************ 00:19:16.184 END TEST nvmf_fio_target 00:19:16.184 ************************************ 00:19:16.442 01:05:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:16.442 01:05:05 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:16.442 01:05:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:16.442 01:05:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.442 01:05:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:16.442 ************************************ 00:19:16.442 START TEST nvmf_bdevio 
00:19:16.442 ************************************ 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:16.442 * Looking for test storage... 00:19:16.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:16.442 01:05:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:18.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:18.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:18.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:18.346 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:18.346 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:18.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:18.607 00:19:18.607 --- 10.0.0.2 ping statistics --- 00:19:18.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.607 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:18.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:19:18.607 00:19:18.607 --- 10.0.0.1 ping statistics --- 00:19:18.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.607 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1148950 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1148950 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1148950 ']' 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.607 01:05:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.607 [2024-07-14 01:05:07.875424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:19:18.607 [2024-07-14 01:05:07.875510] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.607 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.607 [2024-07-14 01:05:07.942222] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.866 [2024-07-14 01:05:08.043686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.866 [2024-07-14 01:05:08.043754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:18.866 [2024-07-14 01:05:08.043772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.866 [2024-07-14 01:05:08.043786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.866 [2024-07-14 01:05:08.043798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.866 [2024-07-14 01:05:08.043900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:18.866 [2024-07-14 01:05:08.043957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:18.866 [2024-07-14 01:05:08.044010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:18.866 [2024-07-14 01:05:08.044013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.866 [2024-07-14 01:05:08.206825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.866 Malloc0 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:18.866 [2024-07-14 01:05:08.258989] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:18.866 { 00:19:18.866 "params": { 00:19:18.866 "name": "Nvme$subsystem", 00:19:18.866 "trtype": "$TEST_TRANSPORT", 00:19:18.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:18.866 "adrfam": "ipv4", 00:19:18.866 "trsvcid": "$NVMF_PORT", 00:19:18.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:18.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:18.866 "hdgst": ${hdgst:-false}, 00:19:18.866 "ddgst": ${ddgst:-false} 00:19:18.866 }, 00:19:18.866 "method": "bdev_nvme_attach_controller" 00:19:18.866 } 00:19:18.866 EOF 00:19:18.866 )") 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:18.866 01:05:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:18.866 "params": { 00:19:18.866 "name": "Nvme1", 00:19:18.866 "trtype": "tcp", 00:19:18.866 "traddr": "10.0.0.2", 00:19:18.866 "adrfam": "ipv4", 00:19:18.866 "trsvcid": "4420", 00:19:18.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.866 "hdgst": false, 00:19:18.866 "ddgst": false 00:19:18.866 }, 00:19:18.866 "method": "bdev_nvme_attach_controller" 00:19:18.866 }' 00:19:19.126 [2024-07-14 01:05:08.309292] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:19:19.126 [2024-07-14 01:05:08.309366] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149013 ] 00:19:19.126 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.126 [2024-07-14 01:05:08.373022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:19.126 [2024-07-14 01:05:08.465518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.126 [2024-07-14 01:05:08.465573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.126 [2024-07-14 01:05:08.465576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.389 I/O targets: 00:19:19.389 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:19.389 00:19:19.389 00:19:19.389 CUnit - A unit testing framework for C - Version 2.1-3 00:19:19.389 http://cunit.sourceforge.net/ 00:19:19.389 00:19:19.389 00:19:19.389 Suite: bdevio tests on: Nvme1n1 00:19:19.389 Test: blockdev write read block ...passed 00:19:19.389 Test: blockdev write zeroes read block ...passed 00:19:19.389 Test: blockdev write zeroes read no split ...passed 00:19:19.682 Test: blockdev write zeroes read split ...passed 00:19:19.682 Test: blockdev write zeroes read split partial ...passed 00:19:19.682 Test: blockdev reset ...[2024-07-14 01:05:08.907828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:19.682 [2024-07-14 01:05:08.907945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177fa60 (9): Bad file descriptor 00:19:19.682 [2024-07-14 01:05:08.961216] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:19.682 passed 00:19:19.682 Test: blockdev write read 8 blocks ...passed 00:19:19.682 Test: blockdev write read size > 128k ...passed 00:19:19.682 Test: blockdev write read invalid size ...passed 00:19:19.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:19.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:19.682 Test: blockdev write read max offset ...passed 00:19:19.942 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:19.942 Test: blockdev writev readv 8 blocks ...passed 00:19:19.942 Test: blockdev writev readv 30 x 1block ...passed 00:19:19.942 Test: blockdev writev readv block ...passed 00:19:19.942 Test: blockdev writev readv size > 128k ...passed 00:19:19.942 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:19.942 Test: blockdev comparev and writev ...[2024-07-14 01:05:09.137383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.942 [2024-07-14 01:05:09.137419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.137452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.942 [2024-07-14 01:05:09.137470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.137817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.942 [2024-07-14 01:05:09.137843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.137880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.942 [2024-07-14 01:05:09.137899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.138274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.942 [2024-07-14 01:05:09.138315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.138338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.942 [2024-07-14 01:05:09.138365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.138754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.942 [2024-07-14 01:05:09.138779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.138800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:19.942 [2024-07-14 01:05:09.138817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:19.942 passed 00:19:19.942 Test: blockdev nvme passthru rw ...passed 00:19:19.942 Test: blockdev nvme passthru vendor specific ...[2024-07-14 01:05:09.222225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.942 [2024-07-14 01:05:09.222252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.222462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.942 [2024-07-14 01:05:09.222487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.222682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.942 [2024-07-14 01:05:09.222706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:19.942 [2024-07-14 01:05:09.222915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:19.942 [2024-07-14 01:05:09.222939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:19.942 passed 00:19:19.942 Test: blockdev nvme admin passthru ...passed 00:19:19.942 Test: blockdev copy ...passed 00:19:19.942 00:19:19.942 Run Summary: Type Total Ran Passed Failed Inactive 00:19:19.942 suites 1 1 n/a 0 0 00:19:19.942 tests 23 23 23 0 0 00:19:19.942 asserts 152 152 152 0 n/a 00:19:19.942 00:19:19.942 Elapsed time = 1.173 seconds 00:19:20.202 01:05:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.202 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.202 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:20.202 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.202 01:05:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:20.202 01:05:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:20.202 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.202 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.203 rmmod nvme_tcp 00:19:20.203 rmmod nvme_fabrics 00:19:20.203 rmmod nvme_keyring 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1148950 ']' 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1148950 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1148950 ']' 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1148950 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1148950 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1148950' 00:19:20.203 killing process with pid 1148950 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1148950 00:19:20.203 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1148950 00:19:20.462 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.462 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:20.462 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:20.462 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.462 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.462 01:05:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.462 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.462 01:05:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.001 01:05:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:23.001 00:19:23.001 real 0m6.238s 00:19:23.001 user 0m9.946s 00:19:23.001 sys 0m2.066s 00:19:23.001 01:05:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:23.001 01:05:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:23.001 ************************************ 00:19:23.001 END TEST nvmf_bdevio 00:19:23.001 ************************************ 00:19:23.001 01:05:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:23.001 01:05:11 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:23.001 01:05:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:23.001 01:05:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.001 01:05:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:23.001 ************************************ 00:19:23.001 START TEST nvmf_auth_target 00:19:23.001 ************************************ 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:23.001 * Looking for test storage... 
00:19:23.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.001 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.002 01:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.002 01:05:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:23.002 01:05:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:23.002 01:05:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:23.002 01:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.908 01:05:13 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:24.908 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:24.908 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:24.908 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:24.908 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:24.908 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:24.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:19:24.909 00:19:24.909 --- 10.0.0.2 ping statistics --- 00:19:24.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.909 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:24.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:24.909 00:19:24.909 --- 10.0.0.1 ping statistics --- 00:19:24.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.909 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1151129 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1151129 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1151129 ']' 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.909 01:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1151182 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aed7f585b017a1a18f10152d8451643e7bbab74bf5bbbcf3 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bZ3 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aed7f585b017a1a18f10152d8451643e7bbab74bf5bbbcf3 0 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aed7f585b017a1a18f10152d8451643e7bbab74bf5bbbcf3 0 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aed7f585b017a1a18f10152d8451643e7bbab74bf5bbbcf3 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:24.909 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bZ3 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bZ3 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.bZ3 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=41a5d7255f9941813cfb22ec1c90e701fb08e93ed0776991e4e63a64210ba483 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nud 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 41a5d7255f9941813cfb22ec1c90e701fb08e93ed0776991e4e63a64210ba483 3 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 41a5d7255f9941813cfb22ec1c90e701fb08e93ed0776991e4e63a64210ba483 3 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=41a5d7255f9941813cfb22ec1c90e701fb08e93ed0776991e4e63a64210ba483 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nud 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nud 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.nud 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=63e9caf1ffd818dffd96cec899923896 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Btw 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 63e9caf1ffd818dffd96cec899923896 1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 63e9caf1ffd818dffd96cec899923896 1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=63e9caf1ffd818dffd96cec899923896 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Btw 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Btw 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Btw 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=85110b66296a84b18d8045f453c59b9575dfca40de2bbe4e 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3Ex 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 85110b66296a84b18d8045f453c59b9575dfca40de2bbe4e 2 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 85110b66296a84b18d8045f453c59b9575dfca40de2bbe4e 2 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=85110b66296a84b18d8045f453c59b9575dfca40de2bbe4e 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3Ex 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3Ex 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.3Ex 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4b0441781fbd0a07ecebd263fe2cecb246806826669630e 00:19:25.168 
01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.y2g 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4b0441781fbd0a07ecebd263fe2cecb246806826669630e 2 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4b0441781fbd0a07ecebd263fe2cecb246806826669630e 2 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4b0441781fbd0a07ecebd263fe2cecb246806826669630e 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.y2g 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.y2g 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.y2g 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dfd442f1fffcaf1e1dd6a19a53517417 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.SO8 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dfd442f1fffcaf1e1dd6a19a53517417 1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dfd442f1fffcaf1e1dd6a19a53517417 1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.168 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dfd442f1fffcaf1e1dd6a19a53517417 00:19:25.169 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:25.169 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.SO8 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.SO8 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.SO8 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8ca76b9c0fd6a0e7f7636de1d610c13d4537d60cc58c3c94a09991d906166e35 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2BL 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8ca76b9c0fd6a0e7f7636de1d610c13d4537d60cc58c3c94a09991d906166e35 3 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8ca76b9c0fd6a0e7f7636de1d610c13d4537d60cc58c3c94a09991d906166e35 3 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8ca76b9c0fd6a0e7f7636de1d610c13d4537d60cc58c3c94a09991d906166e35 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2BL 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2BL 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.2BL 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1151129 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1151129 ']' 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
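[editor note] The gen_dhchap_key / format_dhchap_key calls traced above produce the DH-HMAC-CHAP secrets used for the rest of this test. A minimal stand-alone sketch of the same idea follows; it assumes the DHHC-1 representation is base64 over the ASCII key with its CRC-32 appended little-endian (which matches the secrets that appear later in this log, e.g. the "00" key ending in W+HWcw==), and the helper name make_dhchap_secret is hypothetical, not part of nvmf/common.sh:

    # pick a random key of the requested length (the hex characters themselves are the key)
    key=$(xxd -p -c0 -l 24 /dev/urandom)        # 24 random bytes -> 48 hex chars

    # wrap it as DHHC-1:<hash-id>:<base64(key || crc32)>:
    # hash-id per the trace's digest map: 00 = null, 01 = sha256, 02 = sha384, 03 = sha512
    make_dhchap_secret() {
        local hash_id=$1 key=$2
        # assumption: the trailing 4 bytes are the little-endian CRC-32 of the key
        python3 -c 'import base64,sys,zlib; k=sys.argv[2].encode(); b=k+zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:"+sys.argv[1]+":"+base64.b64encode(b).decode()+":")' "$hash_id" "$key"
    }

    secret=$(make_dhchap_secret 00 "$key")
    keyfile=$(mktemp -t spdk.key-null.XXX)
    echo "$secret" > "$keyfile"
    chmod 0600 "$keyfile"                        # key files must not be world-readable

The resulting key files are what the test later registers on both sides via keyring_file_add_key (rpc.py against /var/tmp/spdk.sock for the target, /var/tmp/host.sock for the host) before attaching controllers with --dhchap-key/--dhchap-ctrlr-key.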
00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.427 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1151182 /var/tmp/host.sock 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1151182 ']' 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:25.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.684 01:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bZ3 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.bZ3 00:19:25.943 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.bZ3 00:19:26.202 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.nud ]] 00:19:26.202 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nud 00:19:26.202 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.202 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.202 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.202 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nud 00:19:26.202 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nud 00:19:26.461 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:26.461 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Btw 00:19:26.461 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.461 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.461 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.461 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Btw 00:19:26.461 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Btw 00:19:26.720 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.3Ex ]] 00:19:26.720 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Ex 00:19:26.720 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.720 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.720 01:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.720 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Ex 00:19:26.720 01:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3Ex 00:19:26.978 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:26.978 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.y2g 00:19:26.978 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.978 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.978 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.978 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.y2g 00:19:26.978 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.y2g 00:19:27.235 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.SO8 ]] 00:19:27.235 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SO8 00:19:27.235 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.235 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.235 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.235 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.SO8 00:19:27.235 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.SO8 00:19:27.492 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:27.492 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2BL 00:19:27.492 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.492 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.492 01:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.492 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2BL 00:19:27.493 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2BL 00:19:27.750 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:27.751 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:27.751 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.751 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.751 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:27.751 01:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.008 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.265 00:19:28.265 01:05:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.265 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.265 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.523 { 00:19:28.523 "cntlid": 1, 00:19:28.523 "qid": 0, 00:19:28.523 "state": "enabled", 00:19:28.523 "thread": "nvmf_tgt_poll_group_000", 00:19:28.523 "listen_address": { 00:19:28.523 "trtype": "TCP", 00:19:28.523 "adrfam": "IPv4", 00:19:28.523 "traddr": "10.0.0.2", 00:19:28.523 "trsvcid": "4420" 00:19:28.523 }, 00:19:28.523 "peer_address": { 00:19:28.523 "trtype": "TCP", 00:19:28.523 "adrfam": "IPv4", 00:19:28.523 "traddr": "10.0.0.1", 00:19:28.523 "trsvcid": "42234" 00:19:28.523 }, 00:19:28.523 "auth": { 00:19:28.523 "state": "completed", 00:19:28.523 "digest": "sha256", 00:19:28.523 "dhgroup": "null" 00:19:28.523 } 00:19:28.523 } 00:19:28.523 ]' 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.523 01:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.806 01:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:19:29.740 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.740 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:29.740 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.740 01:05:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.740 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.740 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.740 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.740 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.997 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.998 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.565 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.565 { 00:19:30.565 "cntlid": 3, 00:19:30.565 "qid": 0, 00:19:30.565 
"state": "enabled", 00:19:30.565 "thread": "nvmf_tgt_poll_group_000", 00:19:30.565 "listen_address": { 00:19:30.565 "trtype": "TCP", 00:19:30.565 "adrfam": "IPv4", 00:19:30.565 "traddr": "10.0.0.2", 00:19:30.565 "trsvcid": "4420" 00:19:30.565 }, 00:19:30.565 "peer_address": { 00:19:30.565 "trtype": "TCP", 00:19:30.565 "adrfam": "IPv4", 00:19:30.565 "traddr": "10.0.0.1", 00:19:30.565 "trsvcid": "58640" 00:19:30.565 }, 00:19:30.565 "auth": { 00:19:30.565 "state": "completed", 00:19:30.565 "digest": "sha256", 00:19:30.565 "dhgroup": "null" 00:19:30.565 } 00:19:30.565 } 00:19:30.565 ]' 00:19:30.565 01:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.823 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:30.823 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.823 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:30.823 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.823 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.823 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.823 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.081 01:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:19:32.017 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.017 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.017 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.017 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.017 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.017 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.017 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.017 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:32.274 01:05:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.274 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.532 00:19:32.532 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.532 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.532 01:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.789 { 00:19:32.789 "cntlid": 5, 00:19:32.789 "qid": 0, 00:19:32.789 "state": "enabled", 00:19:32.789 "thread": "nvmf_tgt_poll_group_000", 00:19:32.789 "listen_address": { 00:19:32.789 "trtype": "TCP", 00:19:32.789 "adrfam": "IPv4", 00:19:32.789 "traddr": "10.0.0.2", 00:19:32.789 "trsvcid": "4420" 00:19:32.789 }, 00:19:32.789 "peer_address": { 00:19:32.789 "trtype": "TCP", 00:19:32.789 "adrfam": "IPv4", 00:19:32.789 "traddr": "10.0.0.1", 00:19:32.789 "trsvcid": "58676" 00:19:32.789 }, 00:19:32.789 "auth": { 00:19:32.789 "state": "completed", 00:19:32.789 "digest": "sha256", 00:19:32.789 "dhgroup": "null" 00:19:32.789 } 00:19:32.789 } 00:19:32.789 ]' 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:32.789 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.047 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.047 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:33.047 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.047 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.047 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.304 01:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:19:34.239 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.239 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.239 01:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.239 01:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.239 01:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.239 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.239 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.239 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.497 01:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.754 00:19:34.754 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.754 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.754 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.012 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.012 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.012 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.012 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.012 01:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.012 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.012 { 00:19:35.012 "cntlid": 7, 00:19:35.012 "qid": 0, 00:19:35.012 "state": "enabled", 00:19:35.012 "thread": "nvmf_tgt_poll_group_000", 00:19:35.012 "listen_address": { 00:19:35.012 "trtype": "TCP", 00:19:35.012 "adrfam": "IPv4", 00:19:35.012 "traddr": "10.0.0.2", 00:19:35.012 "trsvcid": "4420" 00:19:35.012 }, 00:19:35.012 "peer_address": { 00:19:35.012 "trtype": "TCP", 00:19:35.012 "adrfam": "IPv4", 00:19:35.012 "traddr": "10.0.0.1", 00:19:35.012 "trsvcid": "58704" 00:19:35.012 }, 00:19:35.012 "auth": { 00:19:35.012 "state": "completed", 00:19:35.012 "digest": "sha256", 00:19:35.012 "dhgroup": "null" 00:19:35.012 } 00:19:35.012 } 00:19:35.012 ]' 00:19:35.012 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.269 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.269 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.269 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:35.269 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.269 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.269 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.269 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.540 01:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.491 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.749 01:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.749 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.749 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.007 00:19:37.007 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.007 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.007 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.265 { 00:19:37.265 "cntlid": 9, 00:19:37.265 "qid": 0, 00:19:37.265 "state": "enabled", 00:19:37.265 "thread": "nvmf_tgt_poll_group_000", 00:19:37.265 "listen_address": { 00:19:37.265 "trtype": "TCP", 00:19:37.265 "adrfam": "IPv4", 00:19:37.265 "traddr": "10.0.0.2", 00:19:37.265 "trsvcid": "4420" 00:19:37.265 }, 00:19:37.265 "peer_address": { 00:19:37.265 "trtype": "TCP", 00:19:37.265 "adrfam": "IPv4", 00:19:37.265 "traddr": "10.0.0.1", 00:19:37.265 "trsvcid": "58728" 00:19:37.265 }, 00:19:37.265 "auth": { 00:19:37.265 "state": "completed", 00:19:37.265 "digest": "sha256", 00:19:37.265 "dhgroup": "ffdhe2048" 00:19:37.265 } 00:19:37.265 } 00:19:37.265 ]' 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.265 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.524 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.524 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.524 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.524 01:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:19:38.460 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.718 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:38.718 01:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.718 01:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.718 01:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.718 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.718 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:38.718 01:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.718 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.977 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.977 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.977 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.235 00:19:39.235 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.235 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.235 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.493 { 00:19:39.493 "cntlid": 11, 00:19:39.493 "qid": 0, 00:19:39.493 "state": "enabled", 00:19:39.493 "thread": "nvmf_tgt_poll_group_000", 00:19:39.493 "listen_address": { 00:19:39.493 "trtype": "TCP", 00:19:39.493 "adrfam": "IPv4", 00:19:39.493 "traddr": "10.0.0.2", 00:19:39.493 "trsvcid": "4420" 00:19:39.493 }, 00:19:39.493 "peer_address": { 00:19:39.493 "trtype": "TCP", 00:19:39.493 "adrfam": "IPv4", 00:19:39.493 "traddr": "10.0.0.1", 00:19:39.493 "trsvcid": "46978" 00:19:39.493 }, 00:19:39.493 "auth": { 00:19:39.493 "state": "completed", 00:19:39.493 "digest": "sha256", 00:19:39.493 "dhgroup": "ffdhe2048" 00:19:39.493 } 00:19:39.493 } 00:19:39.493 ]' 00:19:39.493 
01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.493 01:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.752 01:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:19:40.688 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.688 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.688 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.688 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.688 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.688 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.688 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.688 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.946 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.514 00:19:41.514 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.514 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.514 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.773 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.773 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.773 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.773 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.773 01:05:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.773 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.773 { 00:19:41.773 "cntlid": 13, 00:19:41.773 "qid": 0, 00:19:41.773 "state": "enabled", 00:19:41.773 "thread": "nvmf_tgt_poll_group_000", 00:19:41.773 "listen_address": { 00:19:41.773 "trtype": "TCP", 00:19:41.773 "adrfam": "IPv4", 00:19:41.773 "traddr": "10.0.0.2", 00:19:41.773 "trsvcid": "4420" 00:19:41.773 }, 00:19:41.773 "peer_address": { 00:19:41.773 "trtype": "TCP", 00:19:41.773 "adrfam": "IPv4", 00:19:41.773 "traddr": "10.0.0.1", 00:19:41.773 "trsvcid": "47008" 00:19:41.773 }, 00:19:41.773 "auth": { 00:19:41.773 "state": "completed", 00:19:41.773 "digest": "sha256", 00:19:41.773 "dhgroup": "ffdhe2048" 00:19:41.773 } 00:19:41.773 } 00:19:41.773 ]' 00:19:41.773 01:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.773 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.773 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.773 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:41.773 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.773 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.773 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.773 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.031 01:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:19:42.966 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.966 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.966 01:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.966 01:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.966 01:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.966 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.966 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.966 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.224 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.792 00:19:43.792 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.792 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.792 01:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.049 { 00:19:44.049 "cntlid": 15, 00:19:44.049 "qid": 0, 00:19:44.049 "state": "enabled", 00:19:44.049 "thread": "nvmf_tgt_poll_group_000", 00:19:44.049 "listen_address": { 00:19:44.049 "trtype": "TCP", 00:19:44.049 "adrfam": "IPv4", 00:19:44.049 "traddr": "10.0.0.2", 00:19:44.049 "trsvcid": "4420" 00:19:44.049 }, 00:19:44.049 "peer_address": { 00:19:44.049 "trtype": "TCP", 00:19:44.049 "adrfam": "IPv4", 00:19:44.049 "traddr": "10.0.0.1", 00:19:44.049 "trsvcid": "47034" 00:19:44.049 }, 00:19:44.049 "auth": { 00:19:44.049 "state": "completed", 00:19:44.049 "digest": "sha256", 00:19:44.049 "dhgroup": "ffdhe2048" 00:19:44.049 } 00:19:44.049 } 00:19:44.049 ]' 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.049 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.306 01:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.241 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.811 01:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.069 00:19:46.069 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.069 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.069 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.327 { 00:19:46.327 "cntlid": 17, 00:19:46.327 "qid": 0, 00:19:46.327 "state": "enabled", 00:19:46.327 "thread": "nvmf_tgt_poll_group_000", 00:19:46.327 "listen_address": { 00:19:46.327 "trtype": "TCP", 00:19:46.327 "adrfam": "IPv4", 00:19:46.327 "traddr": 
"10.0.0.2", 00:19:46.327 "trsvcid": "4420" 00:19:46.327 }, 00:19:46.327 "peer_address": { 00:19:46.327 "trtype": "TCP", 00:19:46.327 "adrfam": "IPv4", 00:19:46.327 "traddr": "10.0.0.1", 00:19:46.327 "trsvcid": "47060" 00:19:46.327 }, 00:19:46.327 "auth": { 00:19:46.327 "state": "completed", 00:19:46.327 "digest": "sha256", 00:19:46.327 "dhgroup": "ffdhe3072" 00:19:46.327 } 00:19:46.327 } 00:19:46.327 ]' 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.327 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.586 01:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:19:47.522 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.522 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.522 01:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.522 01:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.522 01:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.522 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.522 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:47.522 01:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.092 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.350 00:19:48.350 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.350 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.350 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.609 { 00:19:48.609 "cntlid": 19, 00:19:48.609 "qid": 0, 00:19:48.609 "state": "enabled", 00:19:48.609 "thread": "nvmf_tgt_poll_group_000", 00:19:48.609 "listen_address": { 00:19:48.609 "trtype": "TCP", 00:19:48.609 "adrfam": "IPv4", 00:19:48.609 "traddr": "10.0.0.2", 00:19:48.609 "trsvcid": "4420" 00:19:48.609 }, 00:19:48.609 "peer_address": { 00:19:48.609 "trtype": "TCP", 00:19:48.609 "adrfam": "IPv4", 00:19:48.609 "traddr": "10.0.0.1", 00:19:48.609 "trsvcid": "47088" 00:19:48.609 }, 00:19:48.609 "auth": { 00:19:48.609 "state": "completed", 00:19:48.609 "digest": "sha256", 00:19:48.609 "dhgroup": "ffdhe3072" 00:19:48.609 } 00:19:48.609 } 00:19:48.609 ]' 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.609 01:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.867 01:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:19:49.801 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.801 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.801 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.801 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.801 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.801 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.801 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:49.801 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.059 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.634 00:19:50.634 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.634 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.634 01:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.634 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.634 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.634 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.634 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.934 { 00:19:50.934 "cntlid": 21, 00:19:50.934 "qid": 0, 00:19:50.934 "state": "enabled", 00:19:50.934 "thread": "nvmf_tgt_poll_group_000", 00:19:50.934 "listen_address": { 00:19:50.934 "trtype": "TCP", 00:19:50.934 "adrfam": "IPv4", 00:19:50.934 "traddr": "10.0.0.2", 00:19:50.934 "trsvcid": "4420" 00:19:50.934 }, 00:19:50.934 "peer_address": { 00:19:50.934 "trtype": "TCP", 00:19:50.934 "adrfam": "IPv4", 00:19:50.934 "traddr": "10.0.0.1", 00:19:50.934 "trsvcid": "44600" 00:19:50.934 }, 00:19:50.934 "auth": { 00:19:50.934 "state": "completed", 00:19:50.934 "digest": "sha256", 00:19:50.934 "dhgroup": "ffdhe3072" 00:19:50.934 } 00:19:50.934 } 00:19:50.934 ]' 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.934 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.192 01:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:19:52.130 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
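For reference, the per-key sequence that each loop iteration above exercises can be sketched as follows. This is a minimal sketch reconstructed from the commands visible in the trace, assuming a running SPDK target (default RPC socket) plus the host-side bdev application on /var/tmp/host.sock; KEY1/CKEY1 and the *_secret variables are placeholders for the key names and DHHC-1 secrets shown in the log, not new values.

# shorthand for the rpc.py path used throughout the trace above
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
hostsock="/var/tmp/host.sock"

# restrict the host NVMe driver to one digest/dhgroup combination (hostrpc bdev_nvme_set_options)
$rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# target side: allow the host NQN on the subsystem with the key under test
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach a controller, authenticating with the same key pair
$rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# verify what the target negotiated, then detach (the jq checks in the trace assert these values)
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
$rpc -s $hostsock bdev_nvme_detach_controller nvme0

# repeat the handshake with the kernel initiator, then remove the host entry
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$key1_secret" --dhchap-ctrl-secret "$ckey1_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55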
00:19:52.130 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.130 01:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.130 01:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.130 01:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.130 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.130 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.130 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.389 01:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:52.646 00:19:52.905 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.905 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.905 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.163 { 00:19:53.163 "cntlid": 23, 00:19:53.163 "qid": 0, 00:19:53.163 "state": "enabled", 00:19:53.163 "thread": "nvmf_tgt_poll_group_000", 00:19:53.163 "listen_address": { 00:19:53.163 "trtype": "TCP", 00:19:53.163 "adrfam": "IPv4", 00:19:53.163 "traddr": "10.0.0.2", 00:19:53.163 "trsvcid": "4420" 00:19:53.163 }, 00:19:53.163 "peer_address": { 00:19:53.163 "trtype": "TCP", 00:19:53.163 "adrfam": "IPv4", 00:19:53.163 "traddr": "10.0.0.1", 00:19:53.163 "trsvcid": "44630" 00:19:53.163 }, 00:19:53.163 "auth": { 00:19:53.163 "state": "completed", 00:19:53.163 "digest": "sha256", 00:19:53.163 "dhgroup": "ffdhe3072" 00:19:53.163 } 00:19:53.163 } 00:19:53.163 ]' 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.163 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.421 01:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.358 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.616 01:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.184 00:19:55.184 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.184 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.184 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.442 { 00:19:55.442 "cntlid": 25, 00:19:55.442 "qid": 0, 00:19:55.442 "state": "enabled", 00:19:55.442 "thread": "nvmf_tgt_poll_group_000", 00:19:55.442 "listen_address": { 00:19:55.442 "trtype": "TCP", 00:19:55.442 "adrfam": "IPv4", 00:19:55.442 "traddr": "10.0.0.2", 00:19:55.442 "trsvcid": "4420" 00:19:55.442 }, 00:19:55.442 "peer_address": { 00:19:55.442 "trtype": "TCP", 00:19:55.442 "adrfam": "IPv4", 00:19:55.442 "traddr": "10.0.0.1", 00:19:55.442 "trsvcid": "44658" 00:19:55.442 }, 00:19:55.442 "auth": { 00:19:55.442 "state": "completed", 00:19:55.442 "digest": "sha256", 00:19:55.442 "dhgroup": "ffdhe4096" 00:19:55.442 } 00:19:55.442 } 00:19:55.442 ]' 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.442 01:05:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.442 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.443 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.443 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.700 01:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:19:56.638 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.638 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.638 01:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.638 01:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.638 01:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.638 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.638 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.638 01:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.897 01:05:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.897 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.464 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.464 { 00:19:57.464 "cntlid": 27, 00:19:57.464 "qid": 0, 00:19:57.464 "state": "enabled", 00:19:57.464 "thread": "nvmf_tgt_poll_group_000", 00:19:57.464 "listen_address": { 00:19:57.464 "trtype": "TCP", 00:19:57.464 "adrfam": "IPv4", 00:19:57.464 "traddr": "10.0.0.2", 00:19:57.464 "trsvcid": "4420" 00:19:57.464 }, 00:19:57.464 "peer_address": { 00:19:57.464 "trtype": "TCP", 00:19:57.464 "adrfam": "IPv4", 00:19:57.464 "traddr": "10.0.0.1", 00:19:57.464 "trsvcid": "44678" 00:19:57.464 }, 00:19:57.464 "auth": { 00:19:57.464 "state": "completed", 00:19:57.464 "digest": "sha256", 00:19:57.464 "dhgroup": "ffdhe4096" 00:19:57.464 } 00:19:57.464 } 00:19:57.464 ]' 00:19:57.464 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.722 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.722 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.722 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:57.722 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.722 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.722 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.722 01:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.980 01:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:19:58.915 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.915 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.915 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.915 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.915 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.915 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.915 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.915 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.173 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.740 00:19:59.740 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.740 01:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.740 01:05:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.999 { 00:19:59.999 "cntlid": 29, 00:19:59.999 "qid": 0, 00:19:59.999 "state": "enabled", 00:19:59.999 "thread": "nvmf_tgt_poll_group_000", 00:19:59.999 "listen_address": { 00:19:59.999 "trtype": "TCP", 00:19:59.999 "adrfam": "IPv4", 00:19:59.999 "traddr": "10.0.0.2", 00:19:59.999 "trsvcid": "4420" 00:19:59.999 }, 00:19:59.999 "peer_address": { 00:19:59.999 "trtype": "TCP", 00:19:59.999 "adrfam": "IPv4", 00:19:59.999 "traddr": "10.0.0.1", 00:19:59.999 "trsvcid": "54606" 00:19:59.999 }, 00:19:59.999 "auth": { 00:19:59.999 "state": "completed", 00:19:59.999 "digest": "sha256", 00:19:59.999 "dhgroup": "ffdhe4096" 00:19:59.999 } 00:19:59.999 } 00:19:59.999 ]' 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.999 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.258 01:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:20:01.193 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.193 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.193 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.193 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.193 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.193 01:05:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.193 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.193 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.451 01:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.018 00:20:02.018 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.018 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.018 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.018 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.276 { 00:20:02.276 "cntlid": 31, 00:20:02.276 "qid": 0, 00:20:02.276 "state": "enabled", 00:20:02.276 "thread": "nvmf_tgt_poll_group_000", 00:20:02.276 "listen_address": { 00:20:02.276 "trtype": "TCP", 00:20:02.276 "adrfam": "IPv4", 00:20:02.276 "traddr": "10.0.0.2", 00:20:02.276 "trsvcid": "4420" 00:20:02.276 }, 
00:20:02.276 "peer_address": { 00:20:02.276 "trtype": "TCP", 00:20:02.276 "adrfam": "IPv4", 00:20:02.276 "traddr": "10.0.0.1", 00:20:02.276 "trsvcid": "54626" 00:20:02.276 }, 00:20:02.276 "auth": { 00:20:02.276 "state": "completed", 00:20:02.276 "digest": "sha256", 00:20:02.276 "dhgroup": "ffdhe4096" 00:20:02.276 } 00:20:02.276 } 00:20:02.276 ]' 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.276 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.535 01:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.468 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.726 01:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.290 00:20:04.290 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.290 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.290 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.548 { 00:20:04.548 "cntlid": 33, 00:20:04.548 "qid": 0, 00:20:04.548 "state": "enabled", 00:20:04.548 "thread": "nvmf_tgt_poll_group_000", 00:20:04.548 "listen_address": { 00:20:04.548 "trtype": "TCP", 00:20:04.548 "adrfam": "IPv4", 00:20:04.548 "traddr": "10.0.0.2", 00:20:04.548 "trsvcid": "4420" 00:20:04.548 }, 00:20:04.548 "peer_address": { 00:20:04.548 "trtype": "TCP", 00:20:04.548 "adrfam": "IPv4", 00:20:04.548 "traddr": "10.0.0.1", 00:20:04.548 "trsvcid": "54668" 00:20:04.548 }, 00:20:04.548 "auth": { 00:20:04.548 "state": "completed", 00:20:04.548 "digest": "sha256", 00:20:04.548 "dhgroup": "ffdhe6144" 00:20:04.548 } 00:20:04.548 } 00:20:04.548 ]' 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.548 01:05:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.548 01:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.806 01:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:20:05.739 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.739 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.739 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.739 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.739 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.739 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.739 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.739 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.028 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.593 00:20:06.593 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.593 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.593 01:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.851 { 00:20:06.851 "cntlid": 35, 00:20:06.851 "qid": 0, 00:20:06.851 "state": "enabled", 00:20:06.851 "thread": "nvmf_tgt_poll_group_000", 00:20:06.851 "listen_address": { 00:20:06.851 "trtype": "TCP", 00:20:06.851 "adrfam": "IPv4", 00:20:06.851 "traddr": "10.0.0.2", 00:20:06.851 "trsvcid": "4420" 00:20:06.851 }, 00:20:06.851 "peer_address": { 00:20:06.851 "trtype": "TCP", 00:20:06.851 "adrfam": "IPv4", 00:20:06.851 "traddr": "10.0.0.1", 00:20:06.851 "trsvcid": "54696" 00:20:06.851 }, 00:20:06.851 "auth": { 00:20:06.851 "state": "completed", 00:20:06.851 "digest": "sha256", 00:20:06.851 "dhgroup": "ffdhe6144" 00:20:06.851 } 00:20:06.851 } 00:20:06.851 ]' 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.851 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.110 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.110 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.110 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.369 01:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:20:08.303 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.303 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.303 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.303 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.303 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.304 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.304 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.304 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.560 01:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.561 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.561 01:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.125 00:20:09.125 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.125 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.125 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.382 { 00:20:09.382 "cntlid": 37, 00:20:09.382 "qid": 0, 00:20:09.382 "state": "enabled", 00:20:09.382 "thread": "nvmf_tgt_poll_group_000", 00:20:09.382 "listen_address": { 00:20:09.382 "trtype": "TCP", 00:20:09.382 "adrfam": "IPv4", 00:20:09.382 "traddr": "10.0.0.2", 00:20:09.382 "trsvcid": "4420" 00:20:09.382 }, 00:20:09.382 "peer_address": { 00:20:09.382 "trtype": "TCP", 00:20:09.382 "adrfam": "IPv4", 00:20:09.382 "traddr": "10.0.0.1", 00:20:09.382 "trsvcid": "47892" 00:20:09.382 }, 00:20:09.382 "auth": { 00:20:09.382 "state": "completed", 00:20:09.382 "digest": "sha256", 00:20:09.382 "dhgroup": "ffdhe6144" 00:20:09.382 } 00:20:09.382 } 00:20:09.382 ]' 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.382 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.640 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.640 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.640 01:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.897 01:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:20:10.829 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.829 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.829 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.829 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.829 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.829 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.829 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:10.829 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.086 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.692 00:20:11.692 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.692 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.692 01:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.949 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.949 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.949 01:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.949 01:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.950 { 00:20:11.950 "cntlid": 39, 00:20:11.950 "qid": 0, 00:20:11.950 "state": "enabled", 00:20:11.950 "thread": "nvmf_tgt_poll_group_000", 00:20:11.950 "listen_address": { 00:20:11.950 "trtype": "TCP", 00:20:11.950 "adrfam": "IPv4", 00:20:11.950 "traddr": "10.0.0.2", 00:20:11.950 "trsvcid": "4420" 00:20:11.950 }, 00:20:11.950 "peer_address": { 00:20:11.950 "trtype": "TCP", 00:20:11.950 "adrfam": "IPv4", 00:20:11.950 "traddr": "10.0.0.1", 00:20:11.950 "trsvcid": "47934" 00:20:11.950 }, 00:20:11.950 "auth": { 00:20:11.950 "state": "completed", 00:20:11.950 "digest": "sha256", 00:20:11.950 "dhgroup": "ffdhe6144" 00:20:11.950 } 00:20:11.950 } 00:20:11.950 ]' 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.950 01:06:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.950 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.208 01:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.140 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.397 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.398 01:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.398 01:06:02 
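After each attach, the log records the same verification: confirm the controller name, then read the subsystem's qpair and check the negotiated auth parameters. Sketched below with the values of the sha256/ffdhe8192 iteration running at this point; the rpc.py path and jq filters are the ones used above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # The attached controller must show up under the expected name.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # The qpair must have completed DH-CHAP with the digest/dhgroup under test.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Detach before moving on to the next key/dhgroup combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0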
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.398 01:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.331 00:20:14.331 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.331 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.331 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.588 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.588 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.588 01:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.588 01:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.588 01:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.588 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.588 { 00:20:14.588 "cntlid": 41, 00:20:14.588 "qid": 0, 00:20:14.588 "state": "enabled", 00:20:14.588 "thread": "nvmf_tgt_poll_group_000", 00:20:14.588 "listen_address": { 00:20:14.588 "trtype": "TCP", 00:20:14.588 "adrfam": "IPv4", 00:20:14.588 "traddr": "10.0.0.2", 00:20:14.588 "trsvcid": "4420" 00:20:14.588 }, 00:20:14.588 "peer_address": { 00:20:14.588 "trtype": "TCP", 00:20:14.588 "adrfam": "IPv4", 00:20:14.588 "traddr": "10.0.0.1", 00:20:14.588 "trsvcid": "47950" 00:20:14.588 }, 00:20:14.588 "auth": { 00:20:14.588 "state": "completed", 00:20:14.588 "digest": "sha256", 00:20:14.588 "dhgroup": "ffdhe8192" 00:20:14.588 } 00:20:14.588 } 00:20:14.588 ]' 00:20:14.588 01:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.846 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.846 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.846 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.846 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.846 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.846 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.846 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.103 01:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:20:16.036 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.036 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.036 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.036 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.036 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.036 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.036 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.036 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.293 01:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.226 00:20:17.226 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.226 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.226 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.484 { 00:20:17.484 "cntlid": 43, 00:20:17.484 "qid": 0, 00:20:17.484 "state": "enabled", 00:20:17.484 "thread": "nvmf_tgt_poll_group_000", 00:20:17.484 "listen_address": { 00:20:17.484 "trtype": "TCP", 00:20:17.484 "adrfam": "IPv4", 00:20:17.484 "traddr": "10.0.0.2", 00:20:17.484 "trsvcid": "4420" 00:20:17.484 }, 00:20:17.484 "peer_address": { 00:20:17.484 "trtype": "TCP", 00:20:17.484 "adrfam": "IPv4", 00:20:17.484 "traddr": "10.0.0.1", 00:20:17.484 "trsvcid": "47974" 00:20:17.484 }, 00:20:17.484 "auth": { 00:20:17.484 "state": "completed", 00:20:17.484 "digest": "sha256", 00:20:17.484 "dhgroup": "ffdhe8192" 00:20:17.484 } 00:20:17.484 } 00:20:17.484 ]' 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.484 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.742 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.742 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.742 01:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.000 01:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:20:18.935 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.935 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.935 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.935 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.935 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.935 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:18.935 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:18.935 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.193 01:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.126 00:20:20.126 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.126 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.126 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.384 { 00:20:20.384 "cntlid": 45, 00:20:20.384 "qid": 0, 00:20:20.384 "state": "enabled", 00:20:20.384 "thread": "nvmf_tgt_poll_group_000", 00:20:20.384 "listen_address": { 00:20:20.384 "trtype": "TCP", 00:20:20.384 "adrfam": "IPv4", 00:20:20.384 "traddr": "10.0.0.2", 00:20:20.384 "trsvcid": "4420" 
00:20:20.384 }, 00:20:20.384 "peer_address": { 00:20:20.384 "trtype": "TCP", 00:20:20.384 "adrfam": "IPv4", 00:20:20.384 "traddr": "10.0.0.1", 00:20:20.384 "trsvcid": "54782" 00:20:20.384 }, 00:20:20.384 "auth": { 00:20:20.384 "state": "completed", 00:20:20.384 "digest": "sha256", 00:20:20.384 "dhgroup": "ffdhe8192" 00:20:20.384 } 00:20:20.384 } 00:20:20.384 ]' 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.384 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.643 01:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:20:21.585 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.585 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.585 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.585 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.585 01:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.585 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.585 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.585 01:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.843 01:06:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.843 01:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.776 00:20:22.776 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.776 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.776 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.034 { 00:20:23.034 "cntlid": 47, 00:20:23.034 "qid": 0, 00:20:23.034 "state": "enabled", 00:20:23.034 "thread": "nvmf_tgt_poll_group_000", 00:20:23.034 "listen_address": { 00:20:23.034 "trtype": "TCP", 00:20:23.034 "adrfam": "IPv4", 00:20:23.034 "traddr": "10.0.0.2", 00:20:23.034 "trsvcid": "4420" 00:20:23.034 }, 00:20:23.034 "peer_address": { 00:20:23.034 "trtype": "TCP", 00:20:23.034 "adrfam": "IPv4", 00:20:23.034 "traddr": "10.0.0.1", 00:20:23.034 "trsvcid": "54806" 00:20:23.034 }, 00:20:23.034 "auth": { 00:20:23.034 "state": "completed", 00:20:23.034 "digest": "sha256", 00:20:23.034 "dhgroup": "ffdhe8192" 00:20:23.034 } 00:20:23.034 } 00:20:23.034 ]' 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.034 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.293 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.293 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.293 
01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.551 01:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:24.483 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:24.740 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:24.740 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.740 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.740 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:24.740 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:24.740 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.741 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.741 01:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.741 01:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.741 01:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.741 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.741 01:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.997 00:20:24.997 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.997 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.997 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.255 { 00:20:25.255 "cntlid": 49, 00:20:25.255 "qid": 0, 00:20:25.255 "state": "enabled", 00:20:25.255 "thread": "nvmf_tgt_poll_group_000", 00:20:25.255 "listen_address": { 00:20:25.255 "trtype": "TCP", 00:20:25.255 "adrfam": "IPv4", 00:20:25.255 "traddr": "10.0.0.2", 00:20:25.255 "trsvcid": "4420" 00:20:25.255 }, 00:20:25.255 "peer_address": { 00:20:25.255 "trtype": "TCP", 00:20:25.255 "adrfam": "IPv4", 00:20:25.255 "traddr": "10.0.0.1", 00:20:25.255 "trsvcid": "54844" 00:20:25.255 }, 00:20:25.255 "auth": { 00:20:25.255 "state": "completed", 00:20:25.255 "digest": "sha384", 00:20:25.255 "dhgroup": "null" 00:20:25.255 } 00:20:25.255 } 00:20:25.255 ]' 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:25.255 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.513 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.513 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.513 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.772 01:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:20:26.707 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.707 01:06:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.707 01:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.707 01:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.707 01:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.707 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.707 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:26.707 01:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:26.975 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.976 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.238 00:20:27.238 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.238 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.238 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.495 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.495 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.495 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.495 01:06:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.495 01:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.495 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.495 { 00:20:27.495 "cntlid": 51, 00:20:27.495 "qid": 0, 00:20:27.495 "state": "enabled", 00:20:27.495 "thread": "nvmf_tgt_poll_group_000", 00:20:27.495 "listen_address": { 00:20:27.496 "trtype": "TCP", 00:20:27.496 "adrfam": "IPv4", 00:20:27.496 "traddr": "10.0.0.2", 00:20:27.496 "trsvcid": "4420" 00:20:27.496 }, 00:20:27.496 "peer_address": { 00:20:27.496 "trtype": "TCP", 00:20:27.496 "adrfam": "IPv4", 00:20:27.496 "traddr": "10.0.0.1", 00:20:27.496 "trsvcid": "54866" 00:20:27.496 }, 00:20:27.496 "auth": { 00:20:27.496 "state": "completed", 00:20:27.496 "digest": "sha384", 00:20:27.496 "dhgroup": "null" 00:20:27.496 } 00:20:27.496 } 00:20:27.496 ]' 00:20:27.496 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.496 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.496 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.496 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:27.496 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.496 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.496 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.496 01:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.753 01:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:20:28.685 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.685 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.685 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.685 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.685 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.685 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.685 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.685 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:28.943 01:06:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.943 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.199 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.457 { 00:20:29.457 "cntlid": 53, 00:20:29.457 "qid": 0, 00:20:29.457 "state": "enabled", 00:20:29.457 "thread": "nvmf_tgt_poll_group_000", 00:20:29.457 "listen_address": { 00:20:29.457 "trtype": "TCP", 00:20:29.457 "adrfam": "IPv4", 00:20:29.457 "traddr": "10.0.0.2", 00:20:29.457 "trsvcid": "4420" 00:20:29.457 }, 00:20:29.457 "peer_address": { 00:20:29.457 "trtype": "TCP", 00:20:29.457 "adrfam": "IPv4", 00:20:29.457 "traddr": "10.0.0.1", 00:20:29.457 "trsvcid": "46908" 00:20:29.457 }, 00:20:29.457 "auth": { 00:20:29.457 "state": "completed", 00:20:29.457 "digest": "sha384", 00:20:29.457 "dhgroup": "null" 00:20:29.457 } 00:20:29.457 } 00:20:29.457 ]' 00:20:29.457 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.715 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:29.715 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.715 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:29.715 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.715 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.715 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.715 01:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.972 01:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:20:30.904 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.904 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.904 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.904 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.904 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.904 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.904 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:30.904 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.161 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.418 00:20:31.418 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.418 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.419 01:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.676 { 00:20:31.676 "cntlid": 55, 00:20:31.676 "qid": 0, 00:20:31.676 "state": "enabled", 00:20:31.676 "thread": "nvmf_tgt_poll_group_000", 00:20:31.676 "listen_address": { 00:20:31.676 "trtype": "TCP", 00:20:31.676 "adrfam": "IPv4", 00:20:31.676 "traddr": "10.0.0.2", 00:20:31.676 "trsvcid": "4420" 00:20:31.676 }, 00:20:31.676 "peer_address": { 00:20:31.676 "trtype": "TCP", 00:20:31.676 "adrfam": "IPv4", 00:20:31.676 "traddr": "10.0.0.1", 00:20:31.676 "trsvcid": "46944" 00:20:31.676 }, 00:20:31.676 "auth": { 00:20:31.676 "state": "completed", 00:20:31.676 "digest": "sha384", 00:20:31.676 "dhgroup": "null" 00:20:31.676 } 00:20:31.676 } 00:20:31.676 ]' 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.676 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.933 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:31.933 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.933 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.933 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.933 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.190 01:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:20:33.122 01:06:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.122 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.122 01:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.122 01:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.122 01:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.122 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.123 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.123 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.123 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.380 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.638 00:20:33.638 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.638 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.638 01:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.896 { 00:20:33.896 "cntlid": 57, 00:20:33.896 "qid": 0, 00:20:33.896 "state": "enabled", 00:20:33.896 "thread": "nvmf_tgt_poll_group_000", 00:20:33.896 "listen_address": { 00:20:33.896 "trtype": "TCP", 00:20:33.896 "adrfam": "IPv4", 00:20:33.896 "traddr": "10.0.0.2", 00:20:33.896 "trsvcid": "4420" 00:20:33.896 }, 00:20:33.896 "peer_address": { 00:20:33.896 "trtype": "TCP", 00:20:33.896 "adrfam": "IPv4", 00:20:33.896 "traddr": "10.0.0.1", 00:20:33.896 "trsvcid": "46956" 00:20:33.896 }, 00:20:33.896 "auth": { 00:20:33.896 "state": "completed", 00:20:33.896 "digest": "sha384", 00:20:33.896 "dhgroup": "ffdhe2048" 00:20:33.896 } 00:20:33.896 } 00:20:33.896 ]' 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.896 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.153 01:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.521 01:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.779 00:20:35.779 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.779 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.779 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.036 { 00:20:36.036 "cntlid": 59, 00:20:36.036 "qid": 0, 00:20:36.036 "state": "enabled", 00:20:36.036 "thread": "nvmf_tgt_poll_group_000", 00:20:36.036 "listen_address": { 00:20:36.036 "trtype": "TCP", 00:20:36.036 "adrfam": "IPv4", 00:20:36.036 "traddr": "10.0.0.2", 00:20:36.036 "trsvcid": "4420" 00:20:36.036 }, 00:20:36.036 "peer_address": { 00:20:36.036 "trtype": "TCP", 00:20:36.036 "adrfam": "IPv4", 00:20:36.036 
"traddr": "10.0.0.1", 00:20:36.036 "trsvcid": "46986" 00:20:36.036 }, 00:20:36.036 "auth": { 00:20:36.036 "state": "completed", 00:20:36.036 "digest": "sha384", 00:20:36.036 "dhgroup": "ffdhe2048" 00:20:36.036 } 00:20:36.036 } 00:20:36.036 ]' 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.036 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.292 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.292 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.292 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.292 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.292 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.578 01:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:20:37.512 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.512 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.512 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.512 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.512 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.512 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.512 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.512 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.771 01:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.029 00:20:38.029 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.029 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.029 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.287 { 00:20:38.287 "cntlid": 61, 00:20:38.287 "qid": 0, 00:20:38.287 "state": "enabled", 00:20:38.287 "thread": "nvmf_tgt_poll_group_000", 00:20:38.287 "listen_address": { 00:20:38.287 "trtype": "TCP", 00:20:38.287 "adrfam": "IPv4", 00:20:38.287 "traddr": "10.0.0.2", 00:20:38.287 "trsvcid": "4420" 00:20:38.287 }, 00:20:38.287 "peer_address": { 00:20:38.287 "trtype": "TCP", 00:20:38.287 "adrfam": "IPv4", 00:20:38.287 "traddr": "10.0.0.1", 00:20:38.287 "trsvcid": "47012" 00:20:38.287 }, 00:20:38.287 "auth": { 00:20:38.287 "state": "completed", 00:20:38.287 "digest": "sha384", 00:20:38.287 "dhgroup": "ffdhe2048" 00:20:38.287 } 00:20:38.287 } 00:20:38.287 ]' 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.287 01:06:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.544 01:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:20:39.476 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.476 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.476 01:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.476 01:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.476 01:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.476 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.476 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.476 01:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.734 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.992 00:20:40.250 01:06:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.250 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.250 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.508 { 00:20:40.508 "cntlid": 63, 00:20:40.508 "qid": 0, 00:20:40.508 "state": "enabled", 00:20:40.508 "thread": "nvmf_tgt_poll_group_000", 00:20:40.508 "listen_address": { 00:20:40.508 "trtype": "TCP", 00:20:40.508 "adrfam": "IPv4", 00:20:40.508 "traddr": "10.0.0.2", 00:20:40.508 "trsvcid": "4420" 00:20:40.508 }, 00:20:40.508 "peer_address": { 00:20:40.508 "trtype": "TCP", 00:20:40.508 "adrfam": "IPv4", 00:20:40.508 "traddr": "10.0.0.1", 00:20:40.508 "trsvcid": "35124" 00:20:40.508 }, 00:20:40.508 "auth": { 00:20:40.508 "state": "completed", 00:20:40.508 "digest": "sha384", 00:20:40.508 "dhgroup": "ffdhe2048" 00:20:40.508 } 00:20:40.508 } 00:20:40.508 ]' 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.508 01:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.766 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:20:41.699 01:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.699 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.699 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.699 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:41.699 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.699 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.699 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.699 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.699 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.956 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.519 00:20:42.519 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.519 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.519 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.519 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.519 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.519 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.519 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.777 01:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.777 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.777 { 
00:20:42.777 "cntlid": 65, 00:20:42.777 "qid": 0, 00:20:42.777 "state": "enabled", 00:20:42.777 "thread": "nvmf_tgt_poll_group_000", 00:20:42.777 "listen_address": { 00:20:42.777 "trtype": "TCP", 00:20:42.777 "adrfam": "IPv4", 00:20:42.777 "traddr": "10.0.0.2", 00:20:42.777 "trsvcid": "4420" 00:20:42.777 }, 00:20:42.777 "peer_address": { 00:20:42.777 "trtype": "TCP", 00:20:42.777 "adrfam": "IPv4", 00:20:42.777 "traddr": "10.0.0.1", 00:20:42.777 "trsvcid": "35146" 00:20:42.777 }, 00:20:42.777 "auth": { 00:20:42.777 "state": "completed", 00:20:42.777 "digest": "sha384", 00:20:42.777 "dhgroup": "ffdhe3072" 00:20:42.777 } 00:20:42.777 } 00:20:42.777 ]' 00:20:42.777 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.777 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.777 01:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.777 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.777 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.777 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.777 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.777 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.034 01:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:20:43.968 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.968 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.968 01:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.968 01:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.968 01:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.968 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.968 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.968 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.224 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.481 00:20:44.738 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.738 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.738 01:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.995 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.995 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.995 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.995 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.995 01:06:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.995 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.995 { 00:20:44.995 "cntlid": 67, 00:20:44.995 "qid": 0, 00:20:44.995 "state": "enabled", 00:20:44.995 "thread": "nvmf_tgt_poll_group_000", 00:20:44.996 "listen_address": { 00:20:44.996 "trtype": "TCP", 00:20:44.996 "adrfam": "IPv4", 00:20:44.996 "traddr": "10.0.0.2", 00:20:44.996 "trsvcid": "4420" 00:20:44.996 }, 00:20:44.996 "peer_address": { 00:20:44.996 "trtype": "TCP", 00:20:44.996 "adrfam": "IPv4", 00:20:44.996 "traddr": "10.0.0.1", 00:20:44.996 "trsvcid": "35184" 00:20:44.996 }, 00:20:44.996 "auth": { 00:20:44.996 "state": "completed", 00:20:44.996 "digest": "sha384", 00:20:44.996 "dhgroup": "ffdhe3072" 00:20:44.996 } 00:20:44.996 } 00:20:44.996 ]' 00:20:44.996 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.996 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.996 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.996 01:06:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.996 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.996 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.996 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.996 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.252 01:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:20:46.185 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.185 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.185 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.185 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.185 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.185 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.185 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.185 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.443 01:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.702 00:20:46.702 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.702 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.702 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.960 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.960 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.960 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.960 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.960 01:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.960 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.960 { 00:20:46.960 "cntlid": 69, 00:20:46.960 "qid": 0, 00:20:46.960 "state": "enabled", 00:20:46.960 "thread": "nvmf_tgt_poll_group_000", 00:20:46.960 "listen_address": { 00:20:46.960 "trtype": "TCP", 00:20:46.960 "adrfam": "IPv4", 00:20:46.960 "traddr": "10.0.0.2", 00:20:46.960 "trsvcid": "4420" 00:20:46.960 }, 00:20:46.960 "peer_address": { 00:20:46.960 "trtype": "TCP", 00:20:46.960 "adrfam": "IPv4", 00:20:46.960 "traddr": "10.0.0.1", 00:20:46.960 "trsvcid": "35206" 00:20:46.960 }, 00:20:46.960 "auth": { 00:20:46.960 "state": "completed", 00:20:46.960 "digest": "sha384", 00:20:46.960 "dhgroup": "ffdhe3072" 00:20:46.960 } 00:20:46.960 } 00:20:46.960 ]' 00:20:46.960 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.217 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.217 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.217 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.217 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.217 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.217 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.217 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.475 01:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret 
DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:20:48.405 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.405 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.405 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.405 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.405 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.405 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.405 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.405 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.663 01:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.920 00:20:48.920 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.920 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.920 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.177 { 00:20:49.177 "cntlid": 71, 00:20:49.177 "qid": 0, 00:20:49.177 "state": "enabled", 00:20:49.177 "thread": "nvmf_tgt_poll_group_000", 00:20:49.177 "listen_address": { 00:20:49.177 "trtype": "TCP", 00:20:49.177 "adrfam": "IPv4", 00:20:49.177 "traddr": "10.0.0.2", 00:20:49.177 "trsvcid": "4420" 00:20:49.177 }, 00:20:49.177 "peer_address": { 00:20:49.177 "trtype": "TCP", 00:20:49.177 "adrfam": "IPv4", 00:20:49.177 "traddr": "10.0.0.1", 00:20:49.177 "trsvcid": "58114" 00:20:49.177 }, 00:20:49.177 "auth": { 00:20:49.177 "state": "completed", 00:20:49.177 "digest": "sha384", 00:20:49.177 "dhgroup": "ffdhe3072" 00:20:49.177 } 00:20:49.177 } 00:20:49.177 ]' 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.177 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.434 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.434 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.434 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.691 01:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:20:50.621 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.622 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.622 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.622 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.622 01:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.622 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.622 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.622 01:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:50.622 01:06:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.879 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.451 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.451 { 00:20:51.451 "cntlid": 73, 00:20:51.451 "qid": 0, 00:20:51.451 "state": "enabled", 00:20:51.451 "thread": "nvmf_tgt_poll_group_000", 00:20:51.451 "listen_address": { 00:20:51.451 "trtype": "TCP", 00:20:51.451 "adrfam": "IPv4", 00:20:51.451 "traddr": "10.0.0.2", 00:20:51.451 "trsvcid": "4420" 00:20:51.451 }, 00:20:51.451 "peer_address": { 00:20:51.451 "trtype": "TCP", 00:20:51.451 "adrfam": "IPv4", 00:20:51.451 "traddr": "10.0.0.1", 00:20:51.451 "trsvcid": "58144" 00:20:51.451 }, 00:20:51.451 "auth": { 00:20:51.451 
"state": "completed", 00:20:51.451 "digest": "sha384", 00:20:51.451 "dhgroup": "ffdhe4096" 00:20:51.451 } 00:20:51.451 } 00:20:51.451 ]' 00:20:51.451 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.709 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.709 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.709 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:51.709 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.709 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.709 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.709 01:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.967 01:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:20:52.900 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.900 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.900 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.900 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.900 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.900 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.900 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.900 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.158 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.415 00:20:53.415 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.415 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.415 01:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.990 { 00:20:53.990 "cntlid": 75, 00:20:53.990 "qid": 0, 00:20:53.990 "state": "enabled", 00:20:53.990 "thread": "nvmf_tgt_poll_group_000", 00:20:53.990 "listen_address": { 00:20:53.990 "trtype": "TCP", 00:20:53.990 "adrfam": "IPv4", 00:20:53.990 "traddr": "10.0.0.2", 00:20:53.990 "trsvcid": "4420" 00:20:53.990 }, 00:20:53.990 "peer_address": { 00:20:53.990 "trtype": "TCP", 00:20:53.990 "adrfam": "IPv4", 00:20:53.990 "traddr": "10.0.0.1", 00:20:53.990 "trsvcid": "58174" 00:20:53.990 }, 00:20:53.990 "auth": { 00:20:53.990 "state": "completed", 00:20:53.990 "digest": "sha384", 00:20:53.990 "dhgroup": "ffdhe4096" 00:20:53.990 } 00:20:53.990 } 00:20:53.990 ]' 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.990 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.250 01:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:20:55.181 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.181 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.181 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.181 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.181 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.181 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.181 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.181 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.438 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.439 01:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:56.004 00:20:56.004 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.004 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.004 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.261 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.261 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.261 01:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.261 01:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.261 01:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.261 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.261 { 00:20:56.261 "cntlid": 77, 00:20:56.261 "qid": 0, 00:20:56.261 "state": "enabled", 00:20:56.261 "thread": "nvmf_tgt_poll_group_000", 00:20:56.261 "listen_address": { 00:20:56.261 "trtype": "TCP", 00:20:56.261 "adrfam": "IPv4", 00:20:56.261 "traddr": "10.0.0.2", 00:20:56.261 "trsvcid": "4420" 00:20:56.261 }, 00:20:56.261 "peer_address": { 00:20:56.261 "trtype": "TCP", 00:20:56.261 "adrfam": "IPv4", 00:20:56.261 "traddr": "10.0.0.1", 00:20:56.261 "trsvcid": "58196" 00:20:56.261 }, 00:20:56.261 "auth": { 00:20:56.261 "state": "completed", 00:20:56.262 "digest": "sha384", 00:20:56.262 "dhgroup": "ffdhe4096" 00:20:56.262 } 00:20:56.262 } 00:20:56.262 ]' 00:20:56.262 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.262 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.262 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.262 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.262 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.262 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.262 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.262 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.519 01:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:20:57.450 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.450 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.450 01:06:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.450 01:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.450 01:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.450 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.450 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:57.450 01:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:57.707 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.271 00:20:58.271 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.271 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.271 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.529 { 00:20:58.529 "cntlid": 79, 00:20:58.529 "qid": 
0, 00:20:58.529 "state": "enabled", 00:20:58.529 "thread": "nvmf_tgt_poll_group_000", 00:20:58.529 "listen_address": { 00:20:58.529 "trtype": "TCP", 00:20:58.529 "adrfam": "IPv4", 00:20:58.529 "traddr": "10.0.0.2", 00:20:58.529 "trsvcid": "4420" 00:20:58.529 }, 00:20:58.529 "peer_address": { 00:20:58.529 "trtype": "TCP", 00:20:58.529 "adrfam": "IPv4", 00:20:58.529 "traddr": "10.0.0.1", 00:20:58.529 "trsvcid": "58220" 00:20:58.529 }, 00:20:58.529 "auth": { 00:20:58.529 "state": "completed", 00:20:58.529 "digest": "sha384", 00:20:58.529 "dhgroup": "ffdhe4096" 00:20:58.529 } 00:20:58.529 } 00:20:58.529 ]' 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.529 01:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.786 01:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:20:59.718 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.976 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.976 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.976 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.976 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.976 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.976 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.976 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.976 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.233 01:06:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.233 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.798 00:21:00.798 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.798 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.798 01:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.055 { 00:21:01.055 "cntlid": 81, 00:21:01.055 "qid": 0, 00:21:01.055 "state": "enabled", 00:21:01.055 "thread": "nvmf_tgt_poll_group_000", 00:21:01.055 "listen_address": { 00:21:01.055 "trtype": "TCP", 00:21:01.055 "adrfam": "IPv4", 00:21:01.055 "traddr": "10.0.0.2", 00:21:01.055 "trsvcid": "4420" 00:21:01.055 }, 00:21:01.055 "peer_address": { 00:21:01.055 "trtype": "TCP", 00:21:01.055 "adrfam": "IPv4", 00:21:01.055 "traddr": "10.0.0.1", 00:21:01.055 "trsvcid": "57310" 00:21:01.055 }, 00:21:01.055 "auth": { 00:21:01.055 "state": "completed", 00:21:01.055 "digest": "sha384", 00:21:01.055 "dhgroup": "ffdhe6144" 00:21:01.055 } 00:21:01.055 } 00:21:01.055 ]' 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.055 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.313 01:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:21:02.246 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.246 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.246 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.246 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.246 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.246 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.246 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.246 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.503 01:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.069 00:21:03.069 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.069 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.069 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.327 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.327 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.327 01:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.327 01:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.327 01:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.327 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.327 { 00:21:03.327 "cntlid": 83, 00:21:03.327 "qid": 0, 00:21:03.327 "state": "enabled", 00:21:03.327 "thread": "nvmf_tgt_poll_group_000", 00:21:03.327 "listen_address": { 00:21:03.327 "trtype": "TCP", 00:21:03.327 "adrfam": "IPv4", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "trsvcid": "4420" 00:21:03.327 }, 00:21:03.327 "peer_address": { 00:21:03.327 "trtype": "TCP", 00:21:03.327 "adrfam": "IPv4", 00:21:03.327 "traddr": "10.0.0.1", 00:21:03.327 "trsvcid": "57326" 00:21:03.327 }, 00:21:03.327 "auth": { 00:21:03.327 "state": "completed", 00:21:03.327 "digest": "sha384", 00:21:03.327 "dhgroup": "ffdhe6144" 00:21:03.327 } 00:21:03.327 } 00:21:03.327 ]' 00:21:03.327 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.585 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.585 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.585 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.585 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.585 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.585 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.585 01:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.843 01:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret 
DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:21:04.771 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.771 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.771 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.771 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.771 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.771 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.771 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:04.771 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.028 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.591 00:21:05.591 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.591 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.591 01:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.847 { 00:21:05.847 "cntlid": 85, 00:21:05.847 "qid": 0, 00:21:05.847 "state": "enabled", 00:21:05.847 "thread": "nvmf_tgt_poll_group_000", 00:21:05.847 "listen_address": { 00:21:05.847 "trtype": "TCP", 00:21:05.847 "adrfam": "IPv4", 00:21:05.847 "traddr": "10.0.0.2", 00:21:05.847 "trsvcid": "4420" 00:21:05.847 }, 00:21:05.847 "peer_address": { 00:21:05.847 "trtype": "TCP", 00:21:05.847 "adrfam": "IPv4", 00:21:05.847 "traddr": "10.0.0.1", 00:21:05.847 "trsvcid": "57352" 00:21:05.847 }, 00:21:05.847 "auth": { 00:21:05.847 "state": "completed", 00:21:05.847 "digest": "sha384", 00:21:05.847 "dhgroup": "ffdhe6144" 00:21:05.847 } 00:21:05.847 } 00:21:05.847 ]' 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.847 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.848 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.848 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.104 01:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:21:07.079 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.079 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.079 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.079 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.079 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.079 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.079 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
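For readers skimming this trace: every connect_authenticate round above repeats the same pattern, condensed here into a short sketch built from the same rpc.py calls that appear in the log. The host NQN, subsystem NQN, addresses and the /var/tmp/host.sock initiator socket are copied from the surrounding trace; the key names (key2/ckey2) refer to DH-HMAC-CHAP keys registered earlier in the test, outside this excerpt, and the target-side calls (issued through rpc_cmd in the trace) are shown here as plain rpc.py invocations against the target's default socket.

  # One sha384 round, as exercised above (sketch, not the test script itself)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Initiator: restrict negotiation to one digest/dhgroup pair for this round
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target: authorize the host NQN with a host key and (optionally) a controller key
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Initiator: attaching the controller triggers the DH-HMAC-CHAP handshake
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify what was negotiated on the resulting qpair (digest, dhgroup, state == "completed")
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

  # Tear down before the next digest/dhgroup/key combination
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"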
00:21:07.079 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.337 01:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:07.901 00:21:08.159 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.159 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.159 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.417 { 00:21:08.417 "cntlid": 87, 00:21:08.417 "qid": 0, 00:21:08.417 "state": "enabled", 00:21:08.417 "thread": "nvmf_tgt_poll_group_000", 00:21:08.417 "listen_address": { 00:21:08.417 "trtype": "TCP", 00:21:08.417 "adrfam": "IPv4", 00:21:08.417 "traddr": "10.0.0.2", 00:21:08.417 "trsvcid": "4420" 00:21:08.417 }, 00:21:08.417 "peer_address": { 00:21:08.417 "trtype": "TCP", 00:21:08.417 "adrfam": "IPv4", 00:21:08.417 "traddr": "10.0.0.1", 00:21:08.417 "trsvcid": "57384" 00:21:08.417 }, 00:21:08.417 "auth": { 00:21:08.417 "state": "completed", 
00:21:08.417 "digest": "sha384", 00:21:08.417 "dhgroup": "ffdhe6144" 00:21:08.417 } 00:21:08.417 } 00:21:08.417 ]' 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.417 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.675 01:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.608 01:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.866 01:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.800 00:21:10.800 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.800 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.800 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.058 { 00:21:11.058 "cntlid": 89, 00:21:11.058 "qid": 0, 00:21:11.058 "state": "enabled", 00:21:11.058 "thread": "nvmf_tgt_poll_group_000", 00:21:11.058 "listen_address": { 00:21:11.058 "trtype": "TCP", 00:21:11.058 "adrfam": "IPv4", 00:21:11.058 "traddr": "10.0.0.2", 00:21:11.058 "trsvcid": "4420" 00:21:11.058 }, 00:21:11.058 "peer_address": { 00:21:11.058 "trtype": "TCP", 00:21:11.058 "adrfam": "IPv4", 00:21:11.058 "traddr": "10.0.0.1", 00:21:11.058 "trsvcid": "52514" 00:21:11.058 }, 00:21:11.058 "auth": { 00:21:11.058 "state": "completed", 00:21:11.058 "digest": "sha384", 00:21:11.058 "dhgroup": "ffdhe8192" 00:21:11.058 } 00:21:11.058 } 00:21:11.058 ]' 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.058 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.315 01:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.690 01:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
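Each round also exercises the kernel host path with nvme-cli, passing the secrets inline as DHHC-1 strings exactly as in the nvme connect lines above. A minimal sketch of that leg follows; the two DHHC-1 values are placeholders standing in for the literal secrets printed in the trace.

  # Kernel-host leg (sketch; the real secrets appear verbatim in the log above)
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Connect with a host secret and, for bidirectional auth, a controller secret
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
      -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:00:<host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'

  # Disconnect; the trace then de-authorizes the host on the target
  # (rpc.py nvmf_subsystem_remove_host) before trying the next key
  nvme disconnect -n "$subnqn"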
00:21:13.620 00:21:13.620 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.620 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.620 01:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.879 { 00:21:13.879 "cntlid": 91, 00:21:13.879 "qid": 0, 00:21:13.879 "state": "enabled", 00:21:13.879 "thread": "nvmf_tgt_poll_group_000", 00:21:13.879 "listen_address": { 00:21:13.879 "trtype": "TCP", 00:21:13.879 "adrfam": "IPv4", 00:21:13.879 "traddr": "10.0.0.2", 00:21:13.879 "trsvcid": "4420" 00:21:13.879 }, 00:21:13.879 "peer_address": { 00:21:13.879 "trtype": "TCP", 00:21:13.879 "adrfam": "IPv4", 00:21:13.879 "traddr": "10.0.0.1", 00:21:13.879 "trsvcid": "52538" 00:21:13.879 }, 00:21:13.879 "auth": { 00:21:13.879 "state": "completed", 00:21:13.879 "digest": "sha384", 00:21:13.879 "dhgroup": "ffdhe8192" 00:21:13.879 } 00:21:13.879 } 00:21:13.879 ]' 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.879 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.137 01:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:21:15.071 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.071 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.071 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:15.071 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.071 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.071 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.071 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.071 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.329 01:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.330 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.330 01:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.262 00:21:16.262 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.262 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.262 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.520 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.520 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.520 01:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.520 01:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.520 01:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.520 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.520 { 
00:21:16.520 "cntlid": 93, 00:21:16.520 "qid": 0, 00:21:16.520 "state": "enabled", 00:21:16.520 "thread": "nvmf_tgt_poll_group_000", 00:21:16.520 "listen_address": { 00:21:16.520 "trtype": "TCP", 00:21:16.520 "adrfam": "IPv4", 00:21:16.520 "traddr": "10.0.0.2", 00:21:16.520 "trsvcid": "4420" 00:21:16.520 }, 00:21:16.520 "peer_address": { 00:21:16.520 "trtype": "TCP", 00:21:16.520 "adrfam": "IPv4", 00:21:16.520 "traddr": "10.0.0.1", 00:21:16.520 "trsvcid": "52578" 00:21:16.520 }, 00:21:16.520 "auth": { 00:21:16.520 "state": "completed", 00:21:16.520 "digest": "sha384", 00:21:16.520 "dhgroup": "ffdhe8192" 00:21:16.520 } 00:21:16.520 } 00:21:16.520 ]' 00:21:16.520 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.778 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.778 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.778 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:16.778 01:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.778 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.778 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.778 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.035 01:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:21:17.967 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.967 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.967 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.967 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.967 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.967 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.967 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.967 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:18.224 01:07:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.224 01:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.156 00:21:19.156 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.156 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.156 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.413 { 00:21:19.413 "cntlid": 95, 00:21:19.413 "qid": 0, 00:21:19.413 "state": "enabled", 00:21:19.413 "thread": "nvmf_tgt_poll_group_000", 00:21:19.413 "listen_address": { 00:21:19.413 "trtype": "TCP", 00:21:19.413 "adrfam": "IPv4", 00:21:19.413 "traddr": "10.0.0.2", 00:21:19.413 "trsvcid": "4420" 00:21:19.413 }, 00:21:19.413 "peer_address": { 00:21:19.413 "trtype": "TCP", 00:21:19.413 "adrfam": "IPv4", 00:21:19.413 "traddr": "10.0.0.1", 00:21:19.413 "trsvcid": "46520" 00:21:19.413 }, 00:21:19.413 "auth": { 00:21:19.413 "state": "completed", 00:21:19.413 "digest": "sha384", 00:21:19.413 "dhgroup": "ffdhe8192" 00:21:19.413 } 00:21:19.413 } 00:21:19.413 ]' 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.413 01:07:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.413 01:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.671 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.602 01:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.859 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:20.859 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.116 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.373 00:21:21.373 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:21.373 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:21.373 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.631 { 00:21:21.631 "cntlid": 97, 00:21:21.631 "qid": 0, 00:21:21.631 "state": "enabled", 00:21:21.631 "thread": "nvmf_tgt_poll_group_000", 00:21:21.631 "listen_address": { 00:21:21.631 "trtype": "TCP", 00:21:21.631 "adrfam": "IPv4", 00:21:21.631 "traddr": "10.0.0.2", 00:21:21.631 "trsvcid": "4420" 00:21:21.631 }, 00:21:21.631 "peer_address": { 00:21:21.631 "trtype": "TCP", 00:21:21.631 "adrfam": "IPv4", 00:21:21.631 "traddr": "10.0.0.1", 00:21:21.631 "trsvcid": "46552" 00:21:21.631 }, 00:21:21.631 "auth": { 00:21:21.631 "state": "completed", 00:21:21.631 "digest": "sha512", 00:21:21.631 "dhgroup": "null" 00:21:21.631 } 00:21:21.631 } 00:21:21.631 ]' 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.631 01:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.896 01:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret 
DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:21:22.827 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.827 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.827 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.827 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.827 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.827 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.827 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.827 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.084 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:23.084 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.084 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.085 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.341 00:21:23.341 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.341 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.341 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.598 01:07:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.598 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.598 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.598 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.598 01:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.598 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.598 { 00:21:23.598 "cntlid": 99, 00:21:23.598 "qid": 0, 00:21:23.598 "state": "enabled", 00:21:23.598 "thread": "nvmf_tgt_poll_group_000", 00:21:23.598 "listen_address": { 00:21:23.598 "trtype": "TCP", 00:21:23.598 "adrfam": "IPv4", 00:21:23.598 "traddr": "10.0.0.2", 00:21:23.598 "trsvcid": "4420" 00:21:23.598 }, 00:21:23.598 "peer_address": { 00:21:23.598 "trtype": "TCP", 00:21:23.598 "adrfam": "IPv4", 00:21:23.598 "traddr": "10.0.0.1", 00:21:23.598 "trsvcid": "46586" 00:21:23.598 }, 00:21:23.598 "auth": { 00:21:23.598 "state": "completed", 00:21:23.598 "digest": "sha512", 00:21:23.598 "dhgroup": "null" 00:21:23.598 } 00:21:23.598 } 00:21:23.598 ]' 00:21:23.598 01:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.855 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.855 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.855 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:23.855 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.855 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.855 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.855 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.111 01:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:21:25.040 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.040 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.040 01:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.040 01:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.040 01:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.040 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.040 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.040 01:07:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.297 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.553 00:21:25.553 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.553 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.554 01:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.811 { 00:21:25.811 "cntlid": 101, 00:21:25.811 "qid": 0, 00:21:25.811 "state": "enabled", 00:21:25.811 "thread": "nvmf_tgt_poll_group_000", 00:21:25.811 "listen_address": { 00:21:25.811 "trtype": "TCP", 00:21:25.811 "adrfam": "IPv4", 00:21:25.811 "traddr": "10.0.0.2", 00:21:25.811 "trsvcid": "4420" 00:21:25.811 }, 00:21:25.811 "peer_address": { 00:21:25.811 "trtype": "TCP", 00:21:25.811 "adrfam": "IPv4", 00:21:25.811 "traddr": "10.0.0.1", 00:21:25.811 "trsvcid": "46608" 00:21:25.811 }, 00:21:25.811 "auth": 
{ 00:21:25.811 "state": "completed", 00:21:25.811 "digest": "sha512", 00:21:25.811 "dhgroup": "null" 00:21:25.811 } 00:21:25.811 } 00:21:25.811 ]' 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.811 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.068 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:26.068 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.068 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.068 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.068 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.326 01:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:21:27.257 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.257 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.257 01:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.257 01:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.257 01:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.257 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:27.257 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:27.257 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.515 01:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.772 00:21:27.772 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.772 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.772 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:28.030 { 00:21:28.030 "cntlid": 103, 00:21:28.030 "qid": 0, 00:21:28.030 "state": "enabled", 00:21:28.030 "thread": "nvmf_tgt_poll_group_000", 00:21:28.030 "listen_address": { 00:21:28.030 "trtype": "TCP", 00:21:28.030 "adrfam": "IPv4", 00:21:28.030 "traddr": "10.0.0.2", 00:21:28.030 "trsvcid": "4420" 00:21:28.030 }, 00:21:28.030 "peer_address": { 00:21:28.030 "trtype": "TCP", 00:21:28.030 "adrfam": "IPv4", 00:21:28.030 "traddr": "10.0.0.1", 00:21:28.030 "trsvcid": "46630" 00:21:28.030 }, 00:21:28.030 "auth": { 00:21:28.030 "state": "completed", 00:21:28.030 "digest": "sha512", 00:21:28.030 "dhgroup": "null" 00:21:28.030 } 00:21:28.030 } 00:21:28.030 ]' 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.030 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:28.286 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:28.286 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:28.286 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.286 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.286 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.544 01:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.479 01:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.737 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.995 00:21:29.995 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.995 01:07:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.995 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.253 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.253 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.253 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.253 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.253 01:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.253 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.253 { 00:21:30.253 "cntlid": 105, 00:21:30.253 "qid": 0, 00:21:30.253 "state": "enabled", 00:21:30.253 "thread": "nvmf_tgt_poll_group_000", 00:21:30.253 "listen_address": { 00:21:30.253 "trtype": "TCP", 00:21:30.253 "adrfam": "IPv4", 00:21:30.253 "traddr": "10.0.0.2", 00:21:30.253 "trsvcid": "4420" 00:21:30.253 }, 00:21:30.253 "peer_address": { 00:21:30.253 "trtype": "TCP", 00:21:30.253 "adrfam": "IPv4", 00:21:30.253 "traddr": "10.0.0.1", 00:21:30.253 "trsvcid": "46390" 00:21:30.253 }, 00:21:30.253 "auth": { 00:21:30.253 "state": "completed", 00:21:30.253 "digest": "sha512", 00:21:30.253 "dhgroup": "ffdhe2048" 00:21:30.253 } 00:21:30.253 } 00:21:30.253 ]' 00:21:30.253 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.511 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.511 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.511 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:30.511 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.511 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.511 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.511 01:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.769 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:21:31.703 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.703 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.703 01:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.703 01:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
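Each pass is verified the same way: the target's qpair listing must report the digest, DH group and completed auth state that were just configured, and the same secrets are then exercised through the kernel initiator with nvme-cli. Below is a minimal sketch of that check for the pass above (sha512, ffdhe2048, key0), using only commands that appear in this log; the DHHC-1 secret strings are the ones printed above and would normally come from the key files auth.sh generated earlier.

    # target side: confirm the qpair authenticated with the expected parameters
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # kernel initiator: connect with the matching secret pair, then tear down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55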
00:21:31.703 01:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.703 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.703 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.703 01:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.961 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.962 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.962 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.219 00:21:32.219 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.219 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.219 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.477 { 00:21:32.477 "cntlid": 107, 00:21:32.477 "qid": 0, 00:21:32.477 "state": "enabled", 00:21:32.477 "thread": 
"nvmf_tgt_poll_group_000", 00:21:32.477 "listen_address": { 00:21:32.477 "trtype": "TCP", 00:21:32.477 "adrfam": "IPv4", 00:21:32.477 "traddr": "10.0.0.2", 00:21:32.477 "trsvcid": "4420" 00:21:32.477 }, 00:21:32.477 "peer_address": { 00:21:32.477 "trtype": "TCP", 00:21:32.477 "adrfam": "IPv4", 00:21:32.477 "traddr": "10.0.0.1", 00:21:32.477 "trsvcid": "46414" 00:21:32.477 }, 00:21:32.477 "auth": { 00:21:32.477 "state": "completed", 00:21:32.477 "digest": "sha512", 00:21:32.477 "dhgroup": "ffdhe2048" 00:21:32.477 } 00:21:32.477 } 00:21:32.477 ]' 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.477 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.735 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:32.735 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.735 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.735 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.735 01:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.993 01:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:21:33.926 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.926 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.926 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.926 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.926 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.926 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.926 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.926 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:34.186 01:07:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.186 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.443 00:21:34.443 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.443 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.443 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.701 { 00:21:34.701 "cntlid": 109, 00:21:34.701 "qid": 0, 00:21:34.701 "state": "enabled", 00:21:34.701 "thread": "nvmf_tgt_poll_group_000", 00:21:34.701 "listen_address": { 00:21:34.701 "trtype": "TCP", 00:21:34.701 "adrfam": "IPv4", 00:21:34.701 "traddr": "10.0.0.2", 00:21:34.701 "trsvcid": "4420" 00:21:34.701 }, 00:21:34.701 "peer_address": { 00:21:34.701 "trtype": "TCP", 00:21:34.701 "adrfam": "IPv4", 00:21:34.701 "traddr": "10.0.0.1", 00:21:34.701 "trsvcid": "46448" 00:21:34.701 }, 00:21:34.701 "auth": { 00:21:34.701 "state": "completed", 00:21:34.701 "digest": "sha512", 00:21:34.701 "dhgroup": "ffdhe2048" 00:21:34.701 } 00:21:34.701 } 00:21:34.701 ]' 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.701 01:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.701 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:34.701 01:07:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.701 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.701 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.701 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.958 01:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:21:35.891 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.891 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.891 01:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.891 01:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.891 01:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.891 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.891 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.891 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.155 01:07:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.464 00:21:36.464 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.464 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.464 01:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.722 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.722 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.722 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.722 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.722 01:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.722 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.722 { 00:21:36.722 "cntlid": 111, 00:21:36.722 "qid": 0, 00:21:36.722 "state": "enabled", 00:21:36.722 "thread": "nvmf_tgt_poll_group_000", 00:21:36.722 "listen_address": { 00:21:36.722 "trtype": "TCP", 00:21:36.722 "adrfam": "IPv4", 00:21:36.722 "traddr": "10.0.0.2", 00:21:36.722 "trsvcid": "4420" 00:21:36.722 }, 00:21:36.722 "peer_address": { 00:21:36.722 "trtype": "TCP", 00:21:36.722 "adrfam": "IPv4", 00:21:36.722 "traddr": "10.0.0.1", 00:21:36.722 "trsvcid": "46470" 00:21:36.722 }, 00:21:36.722 "auth": { 00:21:36.722 "state": "completed", 00:21:36.722 "digest": "sha512", 00:21:36.722 "dhgroup": "ffdhe2048" 00:21:36.722 } 00:21:36.722 } 00:21:36.722 ]' 00:21:36.722 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.980 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.980 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.980 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:36.980 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.980 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.980 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.980 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.238 01:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.176 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.434 01:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.001 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.001 { 00:21:39.001 "cntlid": 113, 00:21:39.001 "qid": 0, 00:21:39.001 "state": "enabled", 00:21:39.001 "thread": "nvmf_tgt_poll_group_000", 00:21:39.001 "listen_address": { 00:21:39.001 "trtype": "TCP", 00:21:39.001 "adrfam": "IPv4", 00:21:39.001 "traddr": "10.0.0.2", 00:21:39.001 "trsvcid": "4420" 00:21:39.001 }, 00:21:39.001 "peer_address": { 00:21:39.001 "trtype": "TCP", 00:21:39.001 "adrfam": "IPv4", 00:21:39.001 "traddr": "10.0.0.1", 00:21:39.001 "trsvcid": "59910" 00:21:39.001 }, 00:21:39.001 "auth": { 00:21:39.001 "state": "completed", 00:21:39.001 "digest": "sha512", 00:21:39.001 "dhgroup": "ffdhe3072" 00:21:39.001 } 00:21:39.001 } 00:21:39.001 ]' 00:21:39.001 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.259 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.259 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.260 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:39.260 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.260 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.260 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.260 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.518 01:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:21:40.454 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.454 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.454 01:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.454 01:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.454 01:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.454 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.454 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.454 01:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.713 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.279 00:21:41.279 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.279 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.279 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.536 { 00:21:41.536 "cntlid": 115, 00:21:41.536 "qid": 0, 00:21:41.536 "state": "enabled", 00:21:41.536 "thread": "nvmf_tgt_poll_group_000", 00:21:41.536 "listen_address": { 00:21:41.536 "trtype": "TCP", 00:21:41.536 "adrfam": "IPv4", 00:21:41.536 "traddr": "10.0.0.2", 00:21:41.536 "trsvcid": "4420" 00:21:41.536 }, 00:21:41.536 "peer_address": { 00:21:41.536 "trtype": "TCP", 00:21:41.536 "adrfam": "IPv4", 00:21:41.536 "traddr": "10.0.0.1", 00:21:41.536 "trsvcid": "59942" 00:21:41.536 }, 00:21:41.536 "auth": { 00:21:41.536 "state": "completed", 00:21:41.536 "digest": "sha512", 00:21:41.536 "dhgroup": "ffdhe3072" 00:21:41.536 } 00:21:41.536 } 
00:21:41.536 ]' 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.536 01:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.793 01:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:21:42.724 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.724 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.724 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.724 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.724 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.724 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.724 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:42.724 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.982 01:07:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.982 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.239 00:21:43.239 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.239 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.239 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.497 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.497 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.497 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.497 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.754 01:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.754 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.754 { 00:21:43.754 "cntlid": 117, 00:21:43.754 "qid": 0, 00:21:43.754 "state": "enabled", 00:21:43.754 "thread": "nvmf_tgt_poll_group_000", 00:21:43.754 "listen_address": { 00:21:43.754 "trtype": "TCP", 00:21:43.754 "adrfam": "IPv4", 00:21:43.754 "traddr": "10.0.0.2", 00:21:43.754 "trsvcid": "4420" 00:21:43.754 }, 00:21:43.754 "peer_address": { 00:21:43.754 "trtype": "TCP", 00:21:43.754 "adrfam": "IPv4", 00:21:43.754 "traddr": "10.0.0.1", 00:21:43.754 "trsvcid": "59970" 00:21:43.754 }, 00:21:43.754 "auth": { 00:21:43.754 "state": "completed", 00:21:43.754 "digest": "sha512", 00:21:43.754 "dhgroup": "ffdhe3072" 00:21:43.754 } 00:21:43.754 } 00:21:43.754 ]' 00:21:43.754 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.754 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.754 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.754 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:43.754 01:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.754 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.754 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.754 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.013 01:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:21:44.943 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.943 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.943 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.943 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.943 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.943 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.943 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.943 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.200 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.457 00:21:45.457 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.457 01:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.457 01:07:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.715 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.715 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.715 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.715 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.715 01:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.715 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.715 { 00:21:45.715 "cntlid": 119, 00:21:45.715 "qid": 0, 00:21:45.715 "state": "enabled", 00:21:45.715 "thread": "nvmf_tgt_poll_group_000", 00:21:45.715 "listen_address": { 00:21:45.715 "trtype": "TCP", 00:21:45.715 "adrfam": "IPv4", 00:21:45.715 "traddr": "10.0.0.2", 00:21:45.715 "trsvcid": "4420" 00:21:45.715 }, 00:21:45.715 "peer_address": { 00:21:45.715 "trtype": "TCP", 00:21:45.715 "adrfam": "IPv4", 00:21:45.715 "traddr": "10.0.0.1", 00:21:45.715 "trsvcid": "59994" 00:21:45.715 }, 00:21:45.715 "auth": { 00:21:45.715 "state": "completed", 00:21:45.715 "digest": "sha512", 00:21:45.715 "dhgroup": "ffdhe3072" 00:21:45.715 } 00:21:45.715 } 00:21:45.715 ]' 00:21:45.715 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.972 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.972 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.972 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:45.972 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.972 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.972 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.972 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.229 01:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:21:47.162 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.162 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.162 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.162 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.162 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.162 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.162 01:07:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:47.162 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.162 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.419 01:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.420 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.420 01:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.678 00:21:47.678 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.678 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.678 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.936 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.936 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.936 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.936 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.936 01:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.936 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.936 { 00:21:47.936 "cntlid": 121, 00:21:47.936 "qid": 0, 00:21:47.936 "state": "enabled", 00:21:47.936 "thread": "nvmf_tgt_poll_group_000", 00:21:47.936 "listen_address": { 00:21:47.936 "trtype": "TCP", 00:21:47.936 "adrfam": "IPv4", 
00:21:47.936 "traddr": "10.0.0.2", 00:21:47.936 "trsvcid": "4420" 00:21:47.936 }, 00:21:47.936 "peer_address": { 00:21:47.936 "trtype": "TCP", 00:21:47.936 "adrfam": "IPv4", 00:21:47.936 "traddr": "10.0.0.1", 00:21:47.936 "trsvcid": "60018" 00:21:47.936 }, 00:21:47.936 "auth": { 00:21:47.936 "state": "completed", 00:21:47.936 "digest": "sha512", 00:21:47.936 "dhgroup": "ffdhe4096" 00:21:47.936 } 00:21:47.936 } 00:21:47.936 ]' 00:21:47.936 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.194 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.194 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.194 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.194 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.194 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.194 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.194 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.451 01:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:21:49.384 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.384 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.384 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.384 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.384 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.384 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.384 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.384 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.641 01:07:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.641 01:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.899 00:21:49.899 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.899 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.899 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.156 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.156 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.156 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.156 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.156 01:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.156 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.156 { 00:21:50.156 "cntlid": 123, 00:21:50.156 "qid": 0, 00:21:50.156 "state": "enabled", 00:21:50.156 "thread": "nvmf_tgt_poll_group_000", 00:21:50.156 "listen_address": { 00:21:50.156 "trtype": "TCP", 00:21:50.156 "adrfam": "IPv4", 00:21:50.156 "traddr": "10.0.0.2", 00:21:50.156 "trsvcid": "4420" 00:21:50.156 }, 00:21:50.156 "peer_address": { 00:21:50.156 "trtype": "TCP", 00:21:50.156 "adrfam": "IPv4", 00:21:50.156 "traddr": "10.0.0.1", 00:21:50.156 "trsvcid": "50558" 00:21:50.156 }, 00:21:50.156 "auth": { 00:21:50.156 "state": "completed", 00:21:50.156 "digest": "sha512", 00:21:50.156 "dhgroup": "ffdhe4096" 00:21:50.156 } 00:21:50.156 } 00:21:50.156 ]' 00:21:50.156 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.414 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.414 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.414 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.414 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.414 01:07:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.414 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.414 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.672 01:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:21:51.625 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.625 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.625 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.625 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.625 01:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.625 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.625 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.625 01:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.883 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.141 00:21:52.399 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.399 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.399 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.656 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.657 { 00:21:52.657 "cntlid": 125, 00:21:52.657 "qid": 0, 00:21:52.657 "state": "enabled", 00:21:52.657 "thread": "nvmf_tgt_poll_group_000", 00:21:52.657 "listen_address": { 00:21:52.657 "trtype": "TCP", 00:21:52.657 "adrfam": "IPv4", 00:21:52.657 "traddr": "10.0.0.2", 00:21:52.657 "trsvcid": "4420" 00:21:52.657 }, 00:21:52.657 "peer_address": { 00:21:52.657 "trtype": "TCP", 00:21:52.657 "adrfam": "IPv4", 00:21:52.657 "traddr": "10.0.0.1", 00:21:52.657 "trsvcid": "50580" 00:21:52.657 }, 00:21:52.657 "auth": { 00:21:52.657 "state": "completed", 00:21:52.657 "digest": "sha512", 00:21:52.657 "dhgroup": "ffdhe4096" 00:21:52.657 } 00:21:52.657 } 00:21:52.657 ]' 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.657 01:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.915 01:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:21:53.848 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:53.848 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.848 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.848 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.848 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.848 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.848 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.848 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.106 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.363 00:21:54.621 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.621 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.621 01:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.879 { 00:21:54.879 "cntlid": 127, 00:21:54.879 "qid": 0, 00:21:54.879 "state": "enabled", 00:21:54.879 "thread": "nvmf_tgt_poll_group_000", 00:21:54.879 "listen_address": { 00:21:54.879 "trtype": "TCP", 00:21:54.879 "adrfam": "IPv4", 00:21:54.879 "traddr": "10.0.0.2", 00:21:54.879 "trsvcid": "4420" 00:21:54.879 }, 00:21:54.879 "peer_address": { 00:21:54.879 "trtype": "TCP", 00:21:54.879 "adrfam": "IPv4", 00:21:54.879 "traddr": "10.0.0.1", 00:21:54.879 "trsvcid": "50592" 00:21:54.879 }, 00:21:54.879 "auth": { 00:21:54.879 "state": "completed", 00:21:54.879 "digest": "sha512", 00:21:54.879 "dhgroup": "ffdhe4096" 00:21:54.879 } 00:21:54.879 } 00:21:54.879 ]' 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.879 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.137 01:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.071 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.329 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.330 01:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.896 00:21:56.896 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.896 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.896 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.154 { 00:21:57.154 "cntlid": 129, 00:21:57.154 "qid": 0, 00:21:57.154 "state": "enabled", 00:21:57.154 "thread": "nvmf_tgt_poll_group_000", 00:21:57.154 "listen_address": { 00:21:57.154 "trtype": "TCP", 00:21:57.154 "adrfam": "IPv4", 00:21:57.154 "traddr": "10.0.0.2", 00:21:57.154 "trsvcid": "4420" 00:21:57.154 }, 00:21:57.154 "peer_address": { 00:21:57.154 "trtype": "TCP", 00:21:57.154 "adrfam": "IPv4", 00:21:57.154 "traddr": "10.0.0.1", 00:21:57.154 "trsvcid": "50628" 00:21:57.154 }, 00:21:57.154 "auth": { 00:21:57.154 "state": "completed", 00:21:57.154 "digest": "sha512", 00:21:57.154 "dhgroup": "ffdhe6144" 00:21:57.154 } 00:21:57.154 } 00:21:57.154 ]' 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.154 01:07:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.154 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.413 01:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:21:58.346 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.604 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.604 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.604 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.604 01:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.604 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.604 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.604 01:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.861 01:07:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.861 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.424 00:21:59.424 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.425 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.425 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.681 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.681 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.681 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.681 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.681 01:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.681 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.681 { 00:21:59.681 "cntlid": 131, 00:21:59.681 "qid": 0, 00:21:59.681 "state": "enabled", 00:21:59.681 "thread": "nvmf_tgt_poll_group_000", 00:21:59.681 "listen_address": { 00:21:59.681 "trtype": "TCP", 00:21:59.681 "adrfam": "IPv4", 00:21:59.681 "traddr": "10.0.0.2", 00:21:59.681 "trsvcid": "4420" 00:21:59.681 }, 00:21:59.681 "peer_address": { 00:21:59.681 "trtype": "TCP", 00:21:59.681 "adrfam": "IPv4", 00:21:59.681 "traddr": "10.0.0.1", 00:21:59.681 "trsvcid": "37712" 00:21:59.681 }, 00:21:59.681 "auth": { 00:21:59.681 "state": "completed", 00:21:59.682 "digest": "sha512", 00:21:59.682 "dhgroup": "ffdhe6144" 00:21:59.682 } 00:21:59.682 } 00:21:59.682 ]' 00:21:59.682 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.682 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.682 01:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.682 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.682 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.682 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.682 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.682 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.939 01:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.310 01:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.873 00:22:01.873 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.873 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.873 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.130 { 00:22:02.130 "cntlid": 133, 00:22:02.130 "qid": 0, 00:22:02.130 "state": "enabled", 00:22:02.130 "thread": "nvmf_tgt_poll_group_000", 00:22:02.130 "listen_address": { 00:22:02.130 "trtype": "TCP", 00:22:02.130 "adrfam": "IPv4", 00:22:02.130 "traddr": "10.0.0.2", 00:22:02.130 "trsvcid": "4420" 00:22:02.130 }, 00:22:02.130 "peer_address": { 00:22:02.130 "trtype": "TCP", 00:22:02.130 "adrfam": "IPv4", 00:22:02.130 "traddr": "10.0.0.1", 00:22:02.130 "trsvcid": "37742" 00:22:02.130 }, 00:22:02.130 "auth": { 00:22:02.130 "state": "completed", 00:22:02.130 "digest": "sha512", 00:22:02.130 "dhgroup": "ffdhe6144" 00:22:02.130 } 00:22:02.130 } 00:22:02.130 ]' 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.130 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.387 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.387 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.387 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.644 01:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:22:03.578 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.578 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.578 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.578 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.578 01:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
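The round that just finished is the pattern every key/dhgroup pair above repeats: register the key pair for the host NQN on the target, attach from the SPDK-side initiator, verify the qpair, then tear down and deregister. A minimal sketch of one such round, reusing only commands visible in this run (rpc_cmd is the harness's target-side RPC wrapper; the key index shown is just the one from the round above):

  # target side: allow this host NQN to authenticate with key2/ckey2
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side (bdev_nvme initiator, RPC socket /var/tmp/host.sock): attach with the matching keys
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # tear down and deregister before the next key is tried
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55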
00:22:03.578 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.578 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.578 01:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:03.836 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:04.402 00:22:04.402 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.402 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.402 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:04.659 { 00:22:04.659 "cntlid": 135, 00:22:04.659 "qid": 0, 00:22:04.659 "state": "enabled", 00:22:04.659 "thread": "nvmf_tgt_poll_group_000", 00:22:04.659 "listen_address": { 00:22:04.659 "trtype": "TCP", 00:22:04.659 "adrfam": "IPv4", 00:22:04.659 "traddr": "10.0.0.2", 00:22:04.659 "trsvcid": 
"4420" 00:22:04.659 }, 00:22:04.659 "peer_address": { 00:22:04.659 "trtype": "TCP", 00:22:04.659 "adrfam": "IPv4", 00:22:04.659 "traddr": "10.0.0.1", 00:22:04.659 "trsvcid": "37774" 00:22:04.659 }, 00:22:04.659 "auth": { 00:22:04.659 "state": "completed", 00:22:04.659 "digest": "sha512", 00:22:04.659 "dhgroup": "ffdhe6144" 00:22:04.659 } 00:22:04.659 } 00:22:04.659 ]' 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.659 01:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.917 01:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.850 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.107 01:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.069 00:22:07.069 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.069 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.069 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.327 { 00:22:07.327 "cntlid": 137, 00:22:07.327 "qid": 0, 00:22:07.327 "state": "enabled", 00:22:07.327 "thread": "nvmf_tgt_poll_group_000", 00:22:07.327 "listen_address": { 00:22:07.327 "trtype": "TCP", 00:22:07.327 "adrfam": "IPv4", 00:22:07.327 "traddr": "10.0.0.2", 00:22:07.327 "trsvcid": "4420" 00:22:07.327 }, 00:22:07.327 "peer_address": { 00:22:07.327 "trtype": "TCP", 00:22:07.327 "adrfam": "IPv4", 00:22:07.327 "traddr": "10.0.0.1", 00:22:07.327 "trsvcid": "37806" 00:22:07.327 }, 00:22:07.327 "auth": { 00:22:07.327 "state": "completed", 00:22:07.327 "digest": "sha512", 00:22:07.327 "dhgroup": "ffdhe8192" 00:22:07.327 } 00:22:07.327 } 00:22:07.327 ]' 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.327 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.584 01:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:22:08.954 01:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.954 01:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.885 00:22:09.885 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:09.885 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:09.885 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.142 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.142 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.142 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.142 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.142 01:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.142 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.142 { 00:22:10.142 "cntlid": 139, 00:22:10.142 "qid": 0, 00:22:10.142 "state": "enabled", 00:22:10.142 "thread": "nvmf_tgt_poll_group_000", 00:22:10.142 "listen_address": { 00:22:10.142 "trtype": "TCP", 00:22:10.142 "adrfam": "IPv4", 00:22:10.142 "traddr": "10.0.0.2", 00:22:10.142 "trsvcid": "4420" 00:22:10.142 }, 00:22:10.142 "peer_address": { 00:22:10.142 "trtype": "TCP", 00:22:10.142 "adrfam": "IPv4", 00:22:10.142 "traddr": "10.0.0.1", 00:22:10.142 "trsvcid": "54410" 00:22:10.142 }, 00:22:10.142 "auth": { 00:22:10.142 "state": "completed", 00:22:10.142 "digest": "sha512", 00:22:10.142 "dhgroup": "ffdhe8192" 00:22:10.142 } 00:22:10.142 } 00:22:10.142 ]' 00:22:10.143 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.143 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.143 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.143 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.143 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.143 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.143 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.143 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.400 01:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:NjNlOWNhZjFmZmQ4MThkZmZkOTZjZWM4OTk5MjM4OTbGe/9N: --dhchap-ctrl-secret DHHC-1:02:ODUxMTBiNjYyOTZhODRiMThkODA0NWY0NTNjNTliOTU3NWRmY2E0MGRlMmJiZTRl8qaI/g==: 00:22:11.334 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
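Besides the SPDK-host attach/detach, each round also repeats the handshake with the kernel initiator via nvme-cli, passing the secrets in the DHHC-1:xx:<base64>: form seen in the connect commands above. A sketch with placeholder key strings (the literal values are the ones logged in the nvme connect lines of this run):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:01:<host key, base64, placeholder>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key, base64, placeholder>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)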
00:22:11.334 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.334 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.334 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.334 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.334 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:11.334 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.334 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.592 01:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.592 01:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.592 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.592 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.524 00:22:12.524 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.524 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.524 01:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.782 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.782 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.782 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
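The qpair dump that follows is what each round inspects: the controller must appear on the host side as nvme0, and the target must report the negotiated digest, DH group, and a completed auth state. A sketch of those checks using the same RPCs and jq filters as above (rpc_cmd again being the harness's target-side wrapper):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_get_controllers | jq -r '.[].name'        # expect: nvme0
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  jq -r '.[0].auth.digest'  <<< "$qpairs"                  # expect: sha512
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"                  # expect: ffdhe8192
  jq -r '.[0].auth.state'   <<< "$qpairs"                  # expect: completed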
00:22:12.782 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.782 01:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.782 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.782 { 00:22:12.782 "cntlid": 141, 00:22:12.782 "qid": 0, 00:22:12.782 "state": "enabled", 00:22:12.782 "thread": "nvmf_tgt_poll_group_000", 00:22:12.782 "listen_address": { 00:22:12.782 "trtype": "TCP", 00:22:12.782 "adrfam": "IPv4", 00:22:12.782 "traddr": "10.0.0.2", 00:22:12.782 "trsvcid": "4420" 00:22:12.782 }, 00:22:12.782 "peer_address": { 00:22:12.782 "trtype": "TCP", 00:22:12.782 "adrfam": "IPv4", 00:22:12.782 "traddr": "10.0.0.1", 00:22:12.782 "trsvcid": "54422" 00:22:12.782 }, 00:22:12.782 "auth": { 00:22:12.782 "state": "completed", 00:22:12.782 "digest": "sha512", 00:22:12.782 "dhgroup": "ffdhe8192" 00:22:12.782 } 00:22:12.782 } 00:22:12.782 ]' 00:22:12.782 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.782 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.039 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.039 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.039 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.039 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.039 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.039 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.296 01:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTRiMDQ0MTc4MWZiZDBhMDdlY2ViZDI2M2ZlMmNlY2IyNDY4MDY4MjY2Njk2MzBl5eIxqg==: --dhchap-ctrl-secret DHHC-1:01:ZGZkNDQyZjFmZmZjYWYxZTFkZDZhMTlhNTM1MTc0MTcXN4iq: 00:22:14.227 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.227 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.227 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.227 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.227 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.227 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.227 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.227 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.485 01:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:15.418 00:22:15.418 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:15.418 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:15.418 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.676 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.676 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.676 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.676 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.676 01:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.676 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:15.676 { 00:22:15.676 "cntlid": 143, 00:22:15.676 "qid": 0, 00:22:15.676 "state": "enabled", 00:22:15.676 "thread": "nvmf_tgt_poll_group_000", 00:22:15.676 "listen_address": { 00:22:15.676 "trtype": "TCP", 00:22:15.676 "adrfam": "IPv4", 00:22:15.676 "traddr": "10.0.0.2", 00:22:15.676 "trsvcid": "4420" 00:22:15.676 }, 00:22:15.676 "peer_address": { 00:22:15.676 "trtype": "TCP", 00:22:15.676 "adrfam": "IPv4", 00:22:15.676 "traddr": "10.0.0.1", 00:22:15.676 "trsvcid": "54446" 00:22:15.676 }, 00:22:15.676 "auth": { 00:22:15.676 "state": "completed", 00:22:15.676 "digest": "sha512", 00:22:15.676 "dhgroup": "ffdhe8192" 00:22:15.676 } 00:22:15.676 } 00:22:15.676 ]' 00:22:15.676 01:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.676 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.676 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.676 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.676 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.935 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.935 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.935 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.935 01:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.310 01:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.245 00:22:18.245 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.245 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.245 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:18.503 { 00:22:18.503 "cntlid": 145, 00:22:18.503 "qid": 0, 00:22:18.503 "state": "enabled", 00:22:18.503 "thread": "nvmf_tgt_poll_group_000", 00:22:18.503 "listen_address": { 00:22:18.503 "trtype": "TCP", 00:22:18.503 "adrfam": "IPv4", 00:22:18.503 "traddr": "10.0.0.2", 00:22:18.503 "trsvcid": "4420" 00:22:18.503 }, 00:22:18.503 "peer_address": { 00:22:18.503 "trtype": "TCP", 00:22:18.503 "adrfam": "IPv4", 00:22:18.503 "traddr": "10.0.0.1", 00:22:18.503 "trsvcid": "54484" 00:22:18.503 }, 00:22:18.503 "auth": { 00:22:18.503 "state": "completed", 00:22:18.503 "digest": "sha512", 00:22:18.503 "dhgroup": "ffdhe8192" 00:22:18.503 } 00:22:18.503 } 00:22:18.503 ]' 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:18.503 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:18.761 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.761 01:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.761 01:08:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.020 01:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YWVkN2Y1ODViMDE3YTFhMThmMTAxNTJkODQ1MTY0M2U3YmJhYjc0YmY1YmJiY2YzW+HWcw==: --dhchap-ctrl-secret DHHC-1:03:NDFhNWQ3MjU1Zjk5NDE4MTNjZmIyMmVjMWM5MGU3MDFmYjA4ZTkzZWQwNzc2OTkxZTRlNjNhNjQyMTBiYTQ4M1QSo0k=: 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:19.956 01:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:20.890 request: 00:22:20.890 { 00:22:20.890 "name": "nvme0", 00:22:20.890 "trtype": "tcp", 00:22:20.890 "traddr": "10.0.0.2", 00:22:20.890 "adrfam": "ipv4", 00:22:20.890 "trsvcid": "4420", 00:22:20.890 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.890 "prchk_reftag": false, 00:22:20.890 "prchk_guard": false, 00:22:20.890 "hdgst": false, 00:22:20.890 "ddgst": false, 00:22:20.890 "dhchap_key": "key2", 00:22:20.890 "method": "bdev_nvme_attach_controller", 00:22:20.890 "req_id": 1 00:22:20.890 } 00:22:20.890 Got JSON-RPC error response 00:22:20.890 response: 00:22:20.890 { 00:22:20.890 "code": -5, 00:22:20.890 "message": "Input/output error" 00:22:20.890 } 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:20.890 01:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:21.824 request: 00:22:21.824 { 00:22:21.824 "name": "nvme0", 00:22:21.824 "trtype": "tcp", 00:22:21.824 "traddr": "10.0.0.2", 00:22:21.824 "adrfam": "ipv4", 00:22:21.824 "trsvcid": "4420", 00:22:21.824 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.824 "prchk_reftag": false, 00:22:21.824 "prchk_guard": false, 00:22:21.824 "hdgst": false, 00:22:21.824 "ddgst": false, 00:22:21.824 "dhchap_key": "key1", 00:22:21.824 "dhchap_ctrlr_key": "ckey2", 00:22:21.824 "method": "bdev_nvme_attach_controller", 00:22:21.824 "req_id": 1 00:22:21.824 } 00:22:21.824 Got JSON-RPC error response 00:22:21.824 response: 00:22:21.824 { 00:22:21.824 "code": -5, 00:22:21.824 "message": "Input/output error" 00:22:21.824 } 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.824 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.789 request: 00:22:22.789 { 00:22:22.789 "name": "nvme0", 00:22:22.789 "trtype": "tcp", 00:22:22.789 "traddr": "10.0.0.2", 00:22:22.789 "adrfam": "ipv4", 00:22:22.789 "trsvcid": "4420", 00:22:22.789 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:22.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:22.789 "prchk_reftag": false, 00:22:22.789 "prchk_guard": false, 00:22:22.789 "hdgst": false, 00:22:22.789 "ddgst": false, 00:22:22.789 "dhchap_key": "key1", 00:22:22.789 "dhchap_ctrlr_key": "ckey1", 00:22:22.789 "method": "bdev_nvme_attach_controller", 00:22:22.789 "req_id": 1 00:22:22.789 } 00:22:22.789 Got JSON-RPC error response 00:22:22.789 response: 00:22:22.789 { 00:22:22.789 "code": -5, 00:22:22.789 "message": "Input/output error" 00:22:22.789 } 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1151129 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1151129 ']' 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1151129 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1151129 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.789 01:08:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.790 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1151129' 00:22:22.790 killing process with pid 1151129 00:22:22.790 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1151129 00:22:22.790 01:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1151129 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1173709 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1173709 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1173709 ']' 00:22:22.790 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1173709 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1173709 ']' 00:22:23.048 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.305 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.305 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
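For reference, a minimal standalone sketch of the restart step traced above, using the namespace, socket path and flags that appear in this log; the polling loop is an illustrative stand-in for the test's waitforlisten helper, not its actual code:

  # Start the target inside the test namespace with DH-HMAC-CHAP debug logging
  # enabled; --wait-for-rpc defers full initialization until RPCs arrive.
  sudo ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until it answers, then finish initialization.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init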
00:22:23.305 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.305 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.305 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:23.305 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:23.305 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:23.305 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.306 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:23.564 01:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:24.497 00:22:24.497 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:24.497 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:24.497 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.754 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.754 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.754 01:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.754 01:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.754 01:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.754 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.754 { 00:22:24.754 
"cntlid": 1, 00:22:24.754 "qid": 0, 00:22:24.754 "state": "enabled", 00:22:24.754 "thread": "nvmf_tgt_poll_group_000", 00:22:24.754 "listen_address": { 00:22:24.754 "trtype": "TCP", 00:22:24.754 "adrfam": "IPv4", 00:22:24.754 "traddr": "10.0.0.2", 00:22:24.754 "trsvcid": "4420" 00:22:24.754 }, 00:22:24.754 "peer_address": { 00:22:24.754 "trtype": "TCP", 00:22:24.754 "adrfam": "IPv4", 00:22:24.754 "traddr": "10.0.0.1", 00:22:24.754 "trsvcid": "46694" 00:22:24.754 }, 00:22:24.754 "auth": { 00:22:24.754 "state": "completed", 00:22:24.754 "digest": "sha512", 00:22:24.754 "dhgroup": "ffdhe8192" 00:22:24.754 } 00:22:24.754 } 00:22:24.754 ]' 00:22:24.754 01:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.754 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.754 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.754 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.754 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.754 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.754 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.754 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.011 01:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OGNhNzZiOWMwZmQ2YTBlN2Y3NjM2ZGUxZDYxMGMxM2Q0NTM3ZDYwY2M1OGMzYzk0YTA5OTkxZDkwNjE2NmUzNTo875w=: 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:25.944 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.201 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:26.767 request: 00:22:26.767 { 00:22:26.767 "name": "nvme0", 00:22:26.767 "trtype": "tcp", 00:22:26.767 "traddr": "10.0.0.2", 00:22:26.767 "adrfam": "ipv4", 00:22:26.767 "trsvcid": "4420", 00:22:26.767 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:26.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.767 "prchk_reftag": false, 00:22:26.767 "prchk_guard": false, 00:22:26.767 "hdgst": false, 00:22:26.767 "ddgst": false, 00:22:26.767 "dhchap_key": "key3", 00:22:26.767 "method": "bdev_nvme_attach_controller", 00:22:26.767 "req_id": 1 00:22:26.767 } 00:22:26.767 Got JSON-RPC error response 00:22:26.767 response: 00:22:26.767 { 00:22:26.767 "code": -5, 00:22:26.767 "message": "Input/output error" 00:22:26.767 } 00:22:26.767 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:26.767 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:26.767 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:26.767 01:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:26.767 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:26.767 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:26.767 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:26.767 01:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.025 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.289 request: 00:22:27.290 { 00:22:27.290 "name": "nvme0", 00:22:27.290 "trtype": "tcp", 00:22:27.290 "traddr": "10.0.0.2", 00:22:27.290 "adrfam": "ipv4", 00:22:27.290 "trsvcid": "4420", 00:22:27.290 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:27.290 "prchk_reftag": false, 00:22:27.290 "prchk_guard": false, 00:22:27.290 "hdgst": false, 00:22:27.290 "ddgst": false, 00:22:27.290 "dhchap_key": "key3", 00:22:27.290 "method": "bdev_nvme_attach_controller", 00:22:27.290 "req_id": 1 00:22:27.290 } 00:22:27.290 Got JSON-RPC error response 00:22:27.290 response: 00:22:27.290 { 00:22:27.290 "code": -5, 00:22:27.290 "message": "Input/output error" 00:22:27.290 } 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.290 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.549 01:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:27.807 request: 00:22:27.807 { 00:22:27.807 "name": "nvme0", 00:22:27.807 "trtype": "tcp", 00:22:27.807 "traddr": "10.0.0.2", 00:22:27.807 "adrfam": "ipv4", 00:22:27.807 "trsvcid": "4420", 00:22:27.807 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:27.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:27.807 "prchk_reftag": false, 00:22:27.807 "prchk_guard": false, 00:22:27.807 "hdgst": false, 00:22:27.807 "ddgst": false, 00:22:27.807 
"dhchap_key": "key0", 00:22:27.807 "dhchap_ctrlr_key": "key1", 00:22:27.807 "method": "bdev_nvme_attach_controller", 00:22:27.807 "req_id": 1 00:22:27.807 } 00:22:27.807 Got JSON-RPC error response 00:22:27.807 response: 00:22:27.807 { 00:22:27.807 "code": -5, 00:22:27.807 "message": "Input/output error" 00:22:27.807 } 00:22:27.807 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:27.807 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:27.807 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:27.807 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:27.807 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:27.807 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:28.065 00:22:28.065 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:28.065 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:28.065 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.322 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.322 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.322 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1151182 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1151182 ']' 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1151182 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1151182 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1151182' 00:22:28.580 killing process with pid 1151182 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1151182 00:22:28.580 01:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1151182 
00:22:28.838 01:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:28.838 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:28.838 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:28.838 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:28.838 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:28.838 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.838 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:28.838 rmmod nvme_tcp 00:22:28.838 rmmod nvme_fabrics 00:22:28.838 rmmod nvme_keyring 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1173709 ']' 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1173709 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1173709 ']' 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1173709 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1173709 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1173709' 00:22:29.096 killing process with pid 1173709 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1173709 00:22:29.096 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1173709 00:22:29.355 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:29.355 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:29.355 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:29.355 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:29.355 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:29.355 01:08:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.355 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.355 01:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.255 01:08:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:31.255 01:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.bZ3 /tmp/spdk.key-sha256.Btw /tmp/spdk.key-sha384.y2g /tmp/spdk.key-sha512.2BL /tmp/spdk.key-sha512.nud /tmp/spdk.key-sha384.3Ex /tmp/spdk.key-sha256.SO8 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:31.255 00:22:31.255 real 3m8.652s 00:22:31.255 user 7m19.243s 00:22:31.255 sys 0m24.645s 00:22:31.255 01:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:31.255 01:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.255 ************************************ 00:22:31.255 END TEST nvmf_auth_target 00:22:31.255 ************************************ 00:22:31.255 01:08:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:31.255 01:08:20 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:31.255 01:08:20 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:31.255 01:08:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:31.255 01:08:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:31.255 01:08:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.255 ************************************ 00:22:31.255 START TEST nvmf_bdevio_no_huge 00:22:31.255 ************************************ 00:22:31.255 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:31.514 * Looking for test storage... 00:22:31.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
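One way to derive a host NQN and host ID pair like the one used throughout this log, matching the NVME_HOSTNQN / NVME_HOSTID assignments above; the parameter expansion is an assumption about how the helper strips the UUID, not a copy of common.sh:

  # Generate a UUID-based host NQN with nvme-cli and keep the bare UUID as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}
  echo "$NVME_HOSTNQN" "$NVME_HOSTID"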
00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.514 01:08:20 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:31.514 01:08:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
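The discovery loop that continues below maps each matching PCI function to its kernel net device by listing /sys/bus/pci/devices/<addr>/net. A minimal sketch of that lookup, using a PCI address taken from this log:

  # List the net devices that sit under one of the detected NICs.
  pci=0000:0a:00.0
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      echo "Found net device under $pci: ${dev##*/}"
  done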
00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:33.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:33.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:33.416 
01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.416 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:33.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:33.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.417 01:08:22 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:22:33.417 00:22:33.417 --- 10.0.0.2 ping statistics --- 00:22:33.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.417 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:22:33.417 00:22:33.417 --- 10.0.0.1 ping statistics --- 00:22:33.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.417 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1176403 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1176403 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1176403 ']' 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.417 01:08:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.417 [2024-07-14 01:08:22.788143] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:33.417 [2024-07-14 01:08:22.788258] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:33.676 [2024-07-14 01:08:22.859389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.676 [2024-07-14 01:08:22.943955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.676 [2024-07-14 01:08:22.944006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.676 [2024-07-14 01:08:22.944028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.676 [2024-07-14 01:08:22.944040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.676 [2024-07-14 01:08:22.944051] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
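The rpc_cmd calls that follow configure the no-huge target end to end; expressed as direct rpc.py invocations (socket and namespace handling simplified, arguments as they appear below in this log):

  # TCP transport with an 8 KiB IO unit size, a 64 MiB malloc bdev, and a
  # subsystem exposing it on 10.0.0.2:4420.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420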
00:22:33.676 [2024-07-14 01:08:22.944142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:33.676 [2024-07-14 01:08:22.944240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:33.676 [2024-07-14 01:08:22.944243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.676 [2024-07-14 01:08:22.944191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.676 [2024-07-14 01:08:23.068979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.676 Malloc0 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.676 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:33.938 [2024-07-14 01:08:23.107239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:33.938 { 00:22:33.938 "params": { 00:22:33.938 "name": "Nvme$subsystem", 00:22:33.938 "trtype": "$TEST_TRANSPORT", 00:22:33.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.938 "adrfam": "ipv4", 00:22:33.938 "trsvcid": "$NVMF_PORT", 00:22:33.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.938 "hdgst": ${hdgst:-false}, 00:22:33.938 "ddgst": ${ddgst:-false} 00:22:33.938 }, 00:22:33.938 "method": "bdev_nvme_attach_controller" 00:22:33.938 } 00:22:33.938 EOF 00:22:33.938 )") 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:33.938 01:08:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:33.938 "params": { 00:22:33.938 "name": "Nvme1", 00:22:33.938 "trtype": "tcp", 00:22:33.938 "traddr": "10.0.0.2", 00:22:33.938 "adrfam": "ipv4", 00:22:33.938 "trsvcid": "4420", 00:22:33.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.938 "hdgst": false, 00:22:33.938 "ddgst": false 00:22:33.938 }, 00:22:33.938 "method": "bdev_nvme_attach_controller" 00:22:33.938 }' 00:22:33.939 [2024-07-14 01:08:23.152619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
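Roughly how the JSON fragment printed above reaches the bdevio app: wrapped in a bdev-subsystem config and handed over a process-substitution file descriptor (/dev/fd/62 in the trace). The outer "subsystems" envelope here is an assumption about gen_nvmf_target_json's output shape; the params block is copied from this log:

  ./test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )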
00:22:33.939 [2024-07-14 01:08:23.152709] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1176438 ] 00:22:33.939 [2024-07-14 01:08:23.214015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:33.939 [2024-07-14 01:08:23.300995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.939 [2024-07-14 01:08:23.301049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.939 [2024-07-14 01:08:23.301052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.505 I/O targets: 00:22:34.505 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:34.505 00:22:34.505 00:22:34.505 CUnit - A unit testing framework for C - Version 2.1-3 00:22:34.505 http://cunit.sourceforge.net/ 00:22:34.505 00:22:34.505 00:22:34.505 Suite: bdevio tests on: Nvme1n1 00:22:34.505 Test: blockdev write read block ...passed 00:22:34.505 Test: blockdev write zeroes read block ...passed 00:22:34.505 Test: blockdev write zeroes read no split ...passed 00:22:34.505 Test: blockdev write zeroes read split ...passed 00:22:34.505 Test: blockdev write zeroes read split partial ...passed 00:22:34.505 Test: blockdev reset ...[2024-07-14 01:08:23.841362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.505 [2024-07-14 01:08:23.841486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60c4e0 (9): Bad file descriptor 00:22:34.505 [2024-07-14 01:08:23.901973] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:34.505 passed 00:22:34.505 Test: blockdev write read 8 blocks ...passed 00:22:34.505 Test: blockdev write read size > 128k ...passed 00:22:34.505 Test: blockdev write read invalid size ...passed 00:22:34.782 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:34.782 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:34.782 Test: blockdev write read max offset ...passed 00:22:34.782 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:34.782 Test: blockdev writev readv 8 blocks ...passed 00:22:34.782 Test: blockdev writev readv 30 x 1block ...passed 00:22:34.782 Test: blockdev writev readv block ...passed 00:22:34.782 Test: blockdev writev readv size > 128k ...passed 00:22:34.782 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:34.782 Test: blockdev comparev and writev ...[2024-07-14 01:08:24.078415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.782 [2024-07-14 01:08:24.078452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:34.782 [2024-07-14 01:08:24.078476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.782 [2024-07-14 01:08:24.078493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:34.782 [2024-07-14 01:08:24.078879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.782 [2024-07-14 01:08:24.078904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:34.782 [2024-07-14 01:08:24.078925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.782 [2024-07-14 01:08:24.078942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:34.783 [2024-07-14 01:08:24.079317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.783 [2024-07-14 01:08:24.079342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:34.783 [2024-07-14 01:08:24.079365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.783 [2024-07-14 01:08:24.079382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:34.783 [2024-07-14 01:08:24.079769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.783 [2024-07-14 01:08:24.079794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:34.783 [2024-07-14 01:08:24.079817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:34.783 [2024-07-14 01:08:24.079833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:34.783 passed 00:22:34.783 Test: blockdev nvme passthru rw ...passed 00:22:34.783 Test: blockdev nvme passthru vendor specific ...[2024-07-14 01:08:24.163228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.783 [2024-07-14 01:08:24.163255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:34.783 [2024-07-14 01:08:24.163456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.783 [2024-07-14 01:08:24.163479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:34.783 [2024-07-14 01:08:24.163675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.783 [2024-07-14 01:08:24.163698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:34.783 [2024-07-14 01:08:24.163900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:34.783 [2024-07-14 01:08:24.163923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:34.783 passed 00:22:34.783 Test: blockdev nvme admin passthru ...passed 00:22:35.041 Test: blockdev copy ...passed 00:22:35.041 00:22:35.041 Run Summary: Type Total Ran Passed Failed Inactive 00:22:35.041 suites 1 1 n/a 0 0 00:22:35.041 tests 23 23 23 0 0 00:22:35.041 asserts 152 152 152 0 n/a 00:22:35.041 00:22:35.041 Elapsed time = 1.202 seconds 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.300 rmmod nvme_tcp 00:22:35.300 rmmod nvme_fabrics 00:22:35.300 rmmod nvme_keyring 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1176403 ']' 00:22:35.300 01:08:24 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1176403 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1176403 ']' 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1176403 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1176403 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1176403' 00:22:35.300 killing process with pid 1176403 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1176403 00:22:35.300 01:08:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1176403 00:22:35.867 01:08:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.867 01:08:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.867 01:08:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.867 01:08:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.867 01:08:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.867 01:08:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.867 01:08:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.867 01:08:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.811 01:08:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:37.811 00:22:37.811 real 0m6.430s 00:22:37.811 user 0m11.182s 00:22:37.811 sys 0m2.420s 00:22:37.811 01:08:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:37.811 01:08:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:37.811 ************************************ 00:22:37.811 END TEST nvmf_bdevio_no_huge 00:22:37.811 ************************************ 00:22:37.811 01:08:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:37.811 01:08:27 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.811 01:08:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:37.811 01:08:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.811 01:08:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:37.811 ************************************ 00:22:37.811 START TEST nvmf_tls 00:22:37.811 ************************************ 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:37.811 * Looking for test storage... 
00:22:37.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:37.811 01:08:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.713 
01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.713 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.714 01:08:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:39.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:39.714 00:22:39.714 --- 10.0.0.2 ping statistics --- 00:22:39.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.714 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:22:39.714 00:22:39.714 --- 10.0.0.1 ping statistics --- 00:22:39.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.714 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1178507 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1178507 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1178507 ']' 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.714 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.972 [2024-07-14 01:08:29.145527] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:39.972 [2024-07-14 01:08:29.145594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.972 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.972 [2024-07-14 01:08:29.212710] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.972 [2024-07-14 01:08:29.304038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.972 [2024-07-14 01:08:29.304093] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
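The nvmfappstart/waitforlisten helpers above boil down to launching nvmf_tgt inside the test namespace and polling its RPC socket until it answers. A minimal hand-rolled equivalent, assuming the cvl_0_0_ns_spdk namespace from the setup above and standard in-tree paths (the real helpers add logging and retry limits), could look like this:

  # Sketch: start the target in the test namespace, paused until RPC init.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  # Poll the default RPC socket; rpc_get_methods is available before subsystem init.
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # In this test the ssl sock impl and tls-version are configured at this point,
  # before the framework is allowed to finish initializing.
  ./scripts/rpc.py framework_start_init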
00:22:39.972 [2024-07-14 01:08:29.304120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.972 [2024-07-14 01:08:29.304132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.972 [2024-07-14 01:08:29.304158] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.972 [2024-07-14 01:08:29.304195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.972 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.972 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:39.972 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.972 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:39.973 01:08:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.973 01:08:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.973 01:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:39.973 01:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:40.230 true 00:22:40.230 01:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.230 01:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:40.489 01:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:40.489 01:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:40.489 01:08:29 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:40.747 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.747 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:41.005 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:41.005 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:41.005 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:41.263 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.263 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:41.521 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:41.521 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:41.521 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.521 01:08:30 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:41.780 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:41.780 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:41.780 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:42.037 01:08:31 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.037 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:42.294 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:42.294 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:42.294 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:42.550 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.550 01:08:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:42.807 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.53CRbmoznH 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.9LtFdPWF03 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.53CRbmoznH 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9LtFdPWF03 00:22:42.808 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:43.065 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:43.631 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.53CRbmoznH 00:22:43.631 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.53CRbmoznH 00:22:43.631 01:08:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.888 [2024-07-14 01:08:33.113730] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.888 01:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:44.146 01:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.404 [2024-07-14 01:08:33.667197] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.404 [2024-07-14 01:08:33.667429] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.404 01:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.662 malloc0 00:22:44.662 01:08:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.919 01:08:34 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.53CRbmoznH 00:22:45.176 [2024-07-14 01:08:34.403784] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:45.176 01:08:34 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.53CRbmoznH 00:22:45.176 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.137 Initializing NVMe Controllers 00:22:55.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:55.137 Initialization complete. Launching workers. 
00:22:55.137 ======================================================== 00:22:55.137 Latency(us) 00:22:55.137 Device Information : IOPS MiB/s Average min max 00:22:55.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7756.21 30.30 8253.80 1257.13 9501.37 00:22:55.138 ======================================================== 00:22:55.138 Total : 7756.21 30.30 8253.80 1257.13 9501.37 00:22:55.138 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.53CRbmoznH 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.53CRbmoznH' 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1180404 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1180404 /var/tmp/bdevperf.sock 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1180404 ']' 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.138 01:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.395 [2024-07-14 01:08:44.574136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
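The PSK plumbing that drives the perf run above is compact enough to restate as a sketch: the interchange-format key printed earlier is written to a file readable only by the caller, registered for host1 on the target, and handed to the initiator tool. Paths below are placeholders; the key string is the one generated above:

  # Sketch: write the interchange PSK and keep it private.
  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  chmod 0600 "$key_path"
  # Target side: allow host1 to connect to cnode1 with this PSK.
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # Initiator side: the same file is passed to spdk_nvme_perf over TLS.
  ./build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$key_path"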
00:22:55.396 [2024-07-14 01:08:44.574237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180404 ] 00:22:55.396 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.396 [2024-07-14 01:08:44.632274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.396 [2024-07-14 01:08:44.718181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.653 01:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.653 01:08:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:55.653 01:08:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.53CRbmoznH 00:22:55.911 [2024-07-14 01:08:45.100930] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.911 [2024-07-14 01:08:45.101088] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:55.911 TLSTESTn1 00:22:55.911 01:08:45 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:55.911 Running I/O for 10 seconds... 00:23:08.143 00:23:08.143 Latency(us) 00:23:08.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.143 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.143 Verification LBA range: start 0x0 length 0x2000 00:23:08.143 TLSTESTn1 : 10.06 1815.34 7.09 0.00 0.00 70294.85 7281.78 101750.71 00:23:08.143 =================================================================================================================== 00:23:08.143 Total : 1815.34 7.09 0.00 0.00 70294.85 7281.78 101750.71 00:23:08.143 0 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1180404 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1180404 ']' 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1180404 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1180404 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1180404' 00:23:08.143 killing process with pid 1180404 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1180404 00:23:08.143 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.143 00:23:08.143 Latency(us) 00:23:08.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:08.143 =================================================================================================================== 00:23:08.143 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.143 [2024-07-14 01:08:55.422816] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1180404 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9LtFdPWF03 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9LtFdPWF03 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:08.143 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9LtFdPWF03 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9LtFdPWF03' 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1181717 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1181717 /var/tmp/bdevperf.sock 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1181717 ']' 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.144 [2024-07-14 01:08:55.667566] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
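The NOT wrapper above only counts the test as passed when the wrapped command fails, which is the point of handing bdevperf the second key: the target was never told about it. A stripped-down version of that idiom, assuming the same bdevperf RPC socket as above (the real helper in autotest_common.sh keeps more state), is:

  NOT() {
      # Invert the exit status: succeed only when the command fails.
      if "$@"; then return 1; fi
      return 0
  }
  # Attaching with the second key must fail; the target only knows the first
  # key for host1.
  NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.9LtFdPWF03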
00:23:08.144 [2024-07-14 01:08:55.667665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181717 ] 00:23:08.144 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.144 [2024-07-14 01:08:55.727089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.144 [2024-07-14 01:08:55.810555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:08.144 01:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9LtFdPWF03 00:23:08.144 [2024-07-14 01:08:56.125375] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.144 [2024-07-14 01:08:56.125505] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.144 [2024-07-14 01:08:56.132129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:08.144 [2024-07-14 01:08:56.132462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135dab0 (107): Transport endpoint is not connected 00:23:08.144 [2024-07-14 01:08:56.133450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135dab0 (9): Bad file descriptor 00:23:08.144 [2024-07-14 01:08:56.134450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:08.144 [2024-07-14 01:08:56.134471] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:08.144 [2024-07-14 01:08:56.134489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:08.144 request: 00:23:08.144 { 00:23:08.144 "name": "TLSTEST", 00:23:08.144 "trtype": "tcp", 00:23:08.144 "traddr": "10.0.0.2", 00:23:08.144 "adrfam": "ipv4", 00:23:08.144 "trsvcid": "4420", 00:23:08.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.144 "prchk_reftag": false, 00:23:08.144 "prchk_guard": false, 00:23:08.144 "hdgst": false, 00:23:08.144 "ddgst": false, 00:23:08.144 "psk": "/tmp/tmp.9LtFdPWF03", 00:23:08.144 "method": "bdev_nvme_attach_controller", 00:23:08.144 "req_id": 1 00:23:08.144 } 00:23:08.144 Got JSON-RPC error response 00:23:08.144 response: 00:23:08.144 { 00:23:08.144 "code": -5, 00:23:08.144 "message": "Input/output error" 00:23:08.144 } 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1181717 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1181717 ']' 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1181717 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181717 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181717' 00:23:08.144 killing process with pid 1181717 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1181717 00:23:08.144 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.144 00:23:08.144 Latency(us) 00:23:08.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.144 =================================================================================================================== 00:23:08.144 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.144 [2024-07-14 01:08:56.185328] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1181717 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.53CRbmoznH 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.53CRbmoznH 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.53CRbmoznH 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.53CRbmoznH' 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1181736 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1181736 /var/tmp/bdevperf.sock 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1181736 ']' 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.144 [2024-07-14 01:08:56.448127] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:08.144 [2024-07-14 01:08:56.448218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181736 ] 00:23:08.144 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.144 [2024-07-14 01:08:56.505514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.144 [2024-07-14 01:08:56.587968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:08.144 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.53CRbmoznH 00:23:08.144 [2024-07-14 01:08:56.924227] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.144 [2024-07-14 01:08:56.924347] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.144 [2024-07-14 01:08:56.934094] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:08.144 [2024-07-14 01:08:56.934126] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:08.144 [2024-07-14 01:08:56.934179] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:08.144 [2024-07-14 01:08:56.934404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7faab0 (107): Transport endpoint is not connected 00:23:08.144 [2024-07-14 01:08:56.935393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7faab0 (9): Bad file descriptor 00:23:08.144 [2024-07-14 01:08:56.936393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:08.144 [2024-07-14 01:08:56.936414] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:08.144 [2024-07-14 01:08:56.936433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:08.144 request: 00:23:08.144 { 00:23:08.144 "name": "TLSTEST", 00:23:08.144 "trtype": "tcp", 00:23:08.144 "traddr": "10.0.0.2", 00:23:08.144 "adrfam": "ipv4", 00:23:08.145 "trsvcid": "4420", 00:23:08.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.145 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:08.145 "prchk_reftag": false, 00:23:08.145 "prchk_guard": false, 00:23:08.145 "hdgst": false, 00:23:08.145 "ddgst": false, 00:23:08.145 "psk": "/tmp/tmp.53CRbmoznH", 00:23:08.145 "method": "bdev_nvme_attach_controller", 00:23:08.145 "req_id": 1 00:23:08.145 } 00:23:08.145 Got JSON-RPC error response 00:23:08.145 response: 00:23:08.145 { 00:23:08.145 "code": -5, 00:23:08.145 "message": "Input/output error" 00:23:08.145 } 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1181736 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1181736 ']' 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1181736 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181736 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181736' 00:23:08.145 killing process with pid 1181736 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1181736 00:23:08.145 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.145 00:23:08.145 Latency(us) 00:23:08.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.145 =================================================================================================================== 00:23:08.145 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.145 [2024-07-14 01:08:56.988532] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:08.145 01:08:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1181736 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.53CRbmoznH 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.53CRbmoznH 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.53CRbmoznH 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.53CRbmoznH' 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1181872 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1181872 /var/tmp/bdevperf.sock 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1181872 ']' 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.145 [2024-07-14 01:08:57.250568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:08.145 [2024-07-14 01:08:57.250662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181872 ] 00:23:08.145 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.145 [2024-07-14 01:08:57.310470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.145 [2024-07-14 01:08:57.395916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:08.145 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.53CRbmoznH 00:23:08.404 [2024-07-14 01:08:57.777326] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.404 [2024-07-14 01:08:57.777452] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.404 [2024-07-14 01:08:57.786280] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:08.404 [2024-07-14 01:08:57.786311] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:08.404 [2024-07-14 01:08:57.786349] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:08.404 [2024-07-14 01:08:57.787387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22abab0 (107): Transport endpoint is not connected 00:23:08.404 [2024-07-14 01:08:57.788378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22abab0 (9): Bad file descriptor 00:23:08.404 [2024-07-14 01:08:57.789379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:08.404 [2024-07-14 01:08:57.789400] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:08.404 [2024-07-14 01:08:57.789420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:08.404 request: 00:23:08.404 { 00:23:08.404 "name": "TLSTEST", 00:23:08.404 "trtype": "tcp", 00:23:08.404 "traddr": "10.0.0.2", 00:23:08.404 "adrfam": "ipv4", 00:23:08.404 "trsvcid": "4420", 00:23:08.404 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:08.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.404 "prchk_reftag": false, 00:23:08.404 "prchk_guard": false, 00:23:08.404 "hdgst": false, 00:23:08.404 "ddgst": false, 00:23:08.404 "psk": "/tmp/tmp.53CRbmoznH", 00:23:08.404 "method": "bdev_nvme_attach_controller", 00:23:08.404 "req_id": 1 00:23:08.404 } 00:23:08.404 Got JSON-RPC error response 00:23:08.404 response: 00:23:08.404 { 00:23:08.404 "code": -5, 00:23:08.404 "message": "Input/output error" 00:23:08.404 } 00:23:08.404 01:08:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1181872 00:23:08.404 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1181872 ']' 00:23:08.404 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1181872 00:23:08.404 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:08.404 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.404 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1181872 00:23:08.663 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:08.663 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:08.663 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1181872' 00:23:08.663 killing process with pid 1181872 00:23:08.663 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1181872 00:23:08.663 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.663 00:23:08.663 Latency(us) 00:23:08.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.663 =================================================================================================================== 00:23:08.663 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.663 [2024-07-14 01:08:57.843163] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:08.663 01:08:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1181872 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1182008 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1182008 /var/tmp/bdevperf.sock 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1182008 ']' 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.663 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.921 [2024-07-14 01:08:58.100540] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:08.921 [2024-07-14 01:08:58.100634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182008 ] 00:23:08.921 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.921 [2024-07-14 01:08:58.158074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.921 [2024-07-14 01:08:58.238763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.179 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.179 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:09.179 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:09.438 [2024-07-14 01:08:58.618539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:09.438 [2024-07-14 01:08:58.620084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fade60 (9): Bad file descriptor 00:23:09.438 [2024-07-14 01:08:58.621080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.438 [2024-07-14 01:08:58.621102] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:09.438 [2024-07-14 01:08:58.621120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:09.438 request: 00:23:09.438 { 00:23:09.438 "name": "TLSTEST", 00:23:09.438 "trtype": "tcp", 00:23:09.438 "traddr": "10.0.0.2", 00:23:09.438 "adrfam": "ipv4", 00:23:09.438 "trsvcid": "4420", 00:23:09.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.438 "prchk_reftag": false, 00:23:09.438 "prchk_guard": false, 00:23:09.438 "hdgst": false, 00:23:09.438 "ddgst": false, 00:23:09.438 "method": "bdev_nvme_attach_controller", 00:23:09.438 "req_id": 1 00:23:09.438 } 00:23:09.438 Got JSON-RPC error response 00:23:09.438 response: 00:23:09.438 { 00:23:09.438 "code": -5, 00:23:09.438 "message": "Input/output error" 00:23:09.438 } 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1182008 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1182008 ']' 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1182008 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1182008 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1182008' 00:23:09.438 killing process with pid 1182008 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1182008 00:23:09.438 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.438 00:23:09.438 Latency(us) 00:23:09.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.438 =================================================================================================================== 00:23:09.438 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.438 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1182008 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1178507 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1178507 ']' 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1178507 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1178507 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1178507' 00:23:09.697 
killing process with pid 1178507 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1178507 00:23:09.697 [2024-07-14 01:08:58.910207] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:09.697 01:08:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1178507 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Nqm5Ygls3m 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Nqm5Ygls3m 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1182157 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1182157 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1182157 ']' 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.955 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.956 [2024-07-14 01:08:59.236356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:09.956 [2024-07-14 01:08:59.236448] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.956 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.956 [2024-07-14 01:08:59.298627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.214 [2024-07-14 01:08:59.383770] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.214 [2024-07-14 01:08:59.383817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.214 [2024-07-14 01:08:59.383841] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.214 [2024-07-14 01:08:59.383852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.214 [2024-07-14 01:08:59.383862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.214 [2024-07-14 01:08:59.383922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Nqm5Ygls3m 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Nqm5Ygls3m 00:23:10.214 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.472 [2024-07-14 01:08:59.729600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.472 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.730 01:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.988 [2024-07-14 01:09:00.234996] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.988 [2024-07-14 01:09:00.235253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.988 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:11.246 malloc0 00:23:11.246 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:11.504 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Nqm5Ygls3m 00:23:11.763 [2024-07-14 01:09:00.972132] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nqm5Ygls3m 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Nqm5Ygls3m' 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1182390 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1182390 /var/tmp/bdevperf.sock 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1182390 ']' 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.763 01:09:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.763 [2024-07-14 01:09:01.035569] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:11.763 [2024-07-14 01:09:01.035664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182390 ] 00:23:11.763 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.763 [2024-07-14 01:09:01.099181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.021 [2024-07-14 01:09:01.188567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.021 01:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.021 01:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:12.021 01:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nqm5Ygls3m 00:23:12.279 [2024-07-14 01:09:01.529122] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.279 [2024-07-14 01:09:01.529266] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:12.279 TLSTESTn1 00:23:12.279 01:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:12.536 Running I/O for 10 seconds... 00:23:22.497 00:23:22.497 Latency(us) 00:23:22.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.497 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:22.497 Verification LBA range: start 0x0 length 0x2000 00:23:22.497 TLSTESTn1 : 10.07 1861.61 7.27 0.00 0.00 68543.70 6019.60 101750.71 00:23:22.497 =================================================================================================================== 00:23:22.497 Total : 1861.61 7.27 0.00 0.00 68543.70 6019.60 101750.71 00:23:22.497 0 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1182390 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1182390 ']' 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1182390 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1182390 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1182390' 00:23:22.497 killing process with pid 1182390 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1182390 00:23:22.497 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.497 00:23:22.497 Latency(us) 00:23:22.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:22.497 =================================================================================================================== 00:23:22.497 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.497 [2024-07-14 01:09:11.868208] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:22.497 01:09:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1182390 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Nqm5Ygls3m 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nqm5Ygls3m 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nqm5Ygls3m 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Nqm5Ygls3m 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Nqm5Ygls3m' 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1184254 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1184254 /var/tmp/bdevperf.sock 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1184254 ']' 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.754 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.754 [2024-07-14 01:09:12.135684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:22.754 [2024-07-14 01:09:12.135776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184254 ] 00:23:22.754 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.011 [2024-07-14 01:09:12.193845] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.011 [2024-07-14 01:09:12.278988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.011 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.011 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:23.011 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nqm5Ygls3m 00:23:23.268 [2024-07-14 01:09:12.659320] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.268 [2024-07-14 01:09:12.659399] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:23.269 [2024-07-14 01:09:12.659413] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Nqm5Ygls3m 00:23:23.269 request: 00:23:23.269 { 00:23:23.269 "name": "TLSTEST", 00:23:23.269 "trtype": "tcp", 00:23:23.269 "traddr": "10.0.0.2", 00:23:23.269 "adrfam": "ipv4", 00:23:23.269 "trsvcid": "4420", 00:23:23.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.269 "prchk_reftag": false, 00:23:23.269 "prchk_guard": false, 00:23:23.269 "hdgst": false, 00:23:23.269 "ddgst": false, 00:23:23.269 "psk": "/tmp/tmp.Nqm5Ygls3m", 00:23:23.269 "method": "bdev_nvme_attach_controller", 00:23:23.269 "req_id": 1 00:23:23.269 } 00:23:23.269 Got JSON-RPC error response 00:23:23.269 response: 00:23:23.269 { 00:23:23.269 "code": -1, 00:23:23.269 "message": "Operation not permitted" 00:23:23.269 } 00:23:23.269 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1184254 00:23:23.269 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1184254 ']' 00:23:23.269 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1184254 00:23:23.269 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:23.526 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.526 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184254 00:23:23.526 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:23.526 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:23.526 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184254' 00:23:23.526 killing process with pid 1184254 00:23:23.526 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1184254 00:23:23.526 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.526 00:23:23.526 Latency(us) 00:23:23.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.526 
=================================================================================================================== 00:23:23.526 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.526 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1184254 00:23:23.526 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1182157 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1182157 ']' 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1182157 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.527 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1182157 00:23:23.784 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:23.784 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:23.784 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1182157' 00:23:23.784 killing process with pid 1182157 00:23:23.784 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1182157 00:23:23.784 [2024-07-14 01:09:12.958358] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:23.784 01:09:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1182157 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1184396 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1184396 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1184396 ']' 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.042 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.042 [2024-07-14 01:09:13.248352] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:24.042 [2024-07-14 01:09:13.248445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.042 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.042 [2024-07-14 01:09:13.316519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.042 [2024-07-14 01:09:13.411041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.042 [2024-07-14 01:09:13.411116] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.042 [2024-07-14 01:09:13.411132] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.042 [2024-07-14 01:09:13.411146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.042 [2024-07-14 01:09:13.411157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.042 [2024-07-14 01:09:13.411191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Nqm5Ygls3m 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Nqm5Ygls3m 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Nqm5Ygls3m 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Nqm5Ygls3m 00:23:24.300 01:09:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.558 [2024-07-14 01:09:13.781366] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.558 01:09:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:24.815 
01:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.074 [2024-07-14 01:09:14.250604] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.074 [2024-07-14 01:09:14.250856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.074 01:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.332 malloc0 00:23:25.332 01:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:25.593 01:09:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nqm5Ygls3m 00:23:25.593 [2024-07-14 01:09:14.984974] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:25.593 [2024-07-14 01:09:14.985021] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:25.593 [2024-07-14 01:09:14.985068] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:25.593 request: 00:23:25.593 { 00:23:25.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.593 "host": "nqn.2016-06.io.spdk:host1", 00:23:25.593 "psk": "/tmp/tmp.Nqm5Ygls3m", 00:23:25.593 "method": "nvmf_subsystem_add_host", 00:23:25.593 "req_id": 1 00:23:25.593 } 00:23:25.593 Got JSON-RPC error response 00:23:25.593 response: 00:23:25.593 { 00:23:25.593 "code": -32603, 00:23:25.593 "message": "Internal error" 00:23:25.593 } 00:23:25.593 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:25.593 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:25.593 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:25.593 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:25.593 01:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1184396 00:23:25.593 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1184396 ']' 00:23:25.593 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1184396 00:23:25.853 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:25.853 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:25.853 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184396 00:23:25.853 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:25.853 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:25.853 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184396' 00:23:25.853 killing process with pid 1184396 00:23:25.853 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1184396 00:23:25.853 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1184396 00:23:26.112 01:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Nqm5Ygls3m 00:23:26.112 01:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:26.113 
01:09:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1184685 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1184685 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1184685 ']' 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.113 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.113 [2024-07-14 01:09:15.339249] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:26.113 [2024-07-14 01:09:15.339332] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.113 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.113 [2024-07-14 01:09:15.404887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.113 [2024-07-14 01:09:15.493958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.113 [2024-07-14 01:09:15.494008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.113 [2024-07-14 01:09:15.494023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.113 [2024-07-14 01:09:15.494035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.113 [2024-07-14 01:09:15.494046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:26.113 [2024-07-14 01:09:15.494073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Nqm5Ygls3m 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Nqm5Ygls3m 00:23:26.371 01:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:26.628 [2024-07-14 01:09:15.902001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.628 01:09:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:26.885 01:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:27.142 [2024-07-14 01:09:16.495546] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.142 [2024-07-14 01:09:16.495781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.142 01:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:27.399 malloc0 00:23:27.400 01:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:27.656 01:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nqm5Ygls3m 00:23:27.913 [2024-07-14 01:09:17.289443] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1184972 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1184972 /var/tmp/bdevperf.sock 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1184972 ']' 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.913 01:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.170 [2024-07-14 01:09:17.344634] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:28.170 [2024-07-14 01:09:17.344718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184972 ] 00:23:28.170 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.170 [2024-07-14 01:09:17.402931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.170 [2024-07-14 01:09:17.488200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.427 01:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.427 01:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:28.427 01:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nqm5Ygls3m 00:23:28.427 [2024-07-14 01:09:17.817803] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.427 [2024-07-14 01:09:17.817932] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.684 TLSTESTn1 00:23:28.684 01:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:28.941 01:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:28.941 "subsystems": [ 00:23:28.941 { 00:23:28.941 "subsystem": "keyring", 00:23:28.941 "config": [] 00:23:28.941 }, 00:23:28.941 { 00:23:28.941 "subsystem": "iobuf", 00:23:28.941 "config": [ 00:23:28.941 { 00:23:28.941 "method": "iobuf_set_options", 00:23:28.941 "params": { 00:23:28.941 "small_pool_count": 8192, 00:23:28.941 "large_pool_count": 1024, 00:23:28.941 "small_bufsize": 8192, 00:23:28.941 "large_bufsize": 135168 00:23:28.941 } 00:23:28.941 } 00:23:28.941 ] 00:23:28.941 }, 00:23:28.941 { 00:23:28.941 "subsystem": "sock", 00:23:28.942 "config": [ 00:23:28.942 { 00:23:28.942 "method": "sock_set_default_impl", 00:23:28.942 "params": { 00:23:28.942 "impl_name": "posix" 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "sock_impl_set_options", 00:23:28.942 "params": { 00:23:28.942 "impl_name": "ssl", 00:23:28.942 "recv_buf_size": 4096, 00:23:28.942 "send_buf_size": 4096, 00:23:28.942 "enable_recv_pipe": true, 00:23:28.942 "enable_quickack": false, 00:23:28.942 "enable_placement_id": 0, 00:23:28.942 "enable_zerocopy_send_server": true, 00:23:28.942 "enable_zerocopy_send_client": false, 00:23:28.942 "zerocopy_threshold": 0, 00:23:28.942 "tls_version": 0, 00:23:28.942 "enable_ktls": false 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "sock_impl_set_options", 00:23:28.942 "params": { 00:23:28.942 "impl_name": "posix", 00:23:28.942 "recv_buf_size": 2097152, 00:23:28.942 
"send_buf_size": 2097152, 00:23:28.942 "enable_recv_pipe": true, 00:23:28.942 "enable_quickack": false, 00:23:28.942 "enable_placement_id": 0, 00:23:28.942 "enable_zerocopy_send_server": true, 00:23:28.942 "enable_zerocopy_send_client": false, 00:23:28.942 "zerocopy_threshold": 0, 00:23:28.942 "tls_version": 0, 00:23:28.942 "enable_ktls": false 00:23:28.942 } 00:23:28.942 } 00:23:28.942 ] 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "subsystem": "vmd", 00:23:28.942 "config": [] 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "subsystem": "accel", 00:23:28.942 "config": [ 00:23:28.942 { 00:23:28.942 "method": "accel_set_options", 00:23:28.942 "params": { 00:23:28.942 "small_cache_size": 128, 00:23:28.942 "large_cache_size": 16, 00:23:28.942 "task_count": 2048, 00:23:28.942 "sequence_count": 2048, 00:23:28.942 "buf_count": 2048 00:23:28.942 } 00:23:28.942 } 00:23:28.942 ] 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "subsystem": "bdev", 00:23:28.942 "config": [ 00:23:28.942 { 00:23:28.942 "method": "bdev_set_options", 00:23:28.942 "params": { 00:23:28.942 "bdev_io_pool_size": 65535, 00:23:28.942 "bdev_io_cache_size": 256, 00:23:28.942 "bdev_auto_examine": true, 00:23:28.942 "iobuf_small_cache_size": 128, 00:23:28.942 "iobuf_large_cache_size": 16 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "bdev_raid_set_options", 00:23:28.942 "params": { 00:23:28.942 "process_window_size_kb": 1024 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "bdev_iscsi_set_options", 00:23:28.942 "params": { 00:23:28.942 "timeout_sec": 30 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "bdev_nvme_set_options", 00:23:28.942 "params": { 00:23:28.942 "action_on_timeout": "none", 00:23:28.942 "timeout_us": 0, 00:23:28.942 "timeout_admin_us": 0, 00:23:28.942 "keep_alive_timeout_ms": 10000, 00:23:28.942 "arbitration_burst": 0, 00:23:28.942 "low_priority_weight": 0, 00:23:28.942 "medium_priority_weight": 0, 00:23:28.942 "high_priority_weight": 0, 00:23:28.942 "nvme_adminq_poll_period_us": 10000, 00:23:28.942 "nvme_ioq_poll_period_us": 0, 00:23:28.942 "io_queue_requests": 0, 00:23:28.942 "delay_cmd_submit": true, 00:23:28.942 "transport_retry_count": 4, 00:23:28.942 "bdev_retry_count": 3, 00:23:28.942 "transport_ack_timeout": 0, 00:23:28.942 "ctrlr_loss_timeout_sec": 0, 00:23:28.942 "reconnect_delay_sec": 0, 00:23:28.942 "fast_io_fail_timeout_sec": 0, 00:23:28.942 "disable_auto_failback": false, 00:23:28.942 "generate_uuids": false, 00:23:28.942 "transport_tos": 0, 00:23:28.942 "nvme_error_stat": false, 00:23:28.942 "rdma_srq_size": 0, 00:23:28.942 "io_path_stat": false, 00:23:28.942 "allow_accel_sequence": false, 00:23:28.942 "rdma_max_cq_size": 0, 00:23:28.942 "rdma_cm_event_timeout_ms": 0, 00:23:28.942 "dhchap_digests": [ 00:23:28.942 "sha256", 00:23:28.942 "sha384", 00:23:28.942 "sha512" 00:23:28.942 ], 00:23:28.942 "dhchap_dhgroups": [ 00:23:28.942 "null", 00:23:28.942 "ffdhe2048", 00:23:28.942 "ffdhe3072", 00:23:28.942 "ffdhe4096", 00:23:28.942 "ffdhe6144", 00:23:28.942 "ffdhe8192" 00:23:28.942 ] 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "bdev_nvme_set_hotplug", 00:23:28.942 "params": { 00:23:28.942 "period_us": 100000, 00:23:28.942 "enable": false 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "bdev_malloc_create", 00:23:28.942 "params": { 00:23:28.942 "name": "malloc0", 00:23:28.942 "num_blocks": 8192, 00:23:28.942 "block_size": 4096, 00:23:28.942 "physical_block_size": 4096, 00:23:28.942 "uuid": 
"793fa73a-b49b-4b3a-adb9-735544f34f0b", 00:23:28.942 "optimal_io_boundary": 0 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "bdev_wait_for_examine" 00:23:28.942 } 00:23:28.942 ] 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "subsystem": "nbd", 00:23:28.942 "config": [] 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "subsystem": "scheduler", 00:23:28.942 "config": [ 00:23:28.942 { 00:23:28.942 "method": "framework_set_scheduler", 00:23:28.942 "params": { 00:23:28.942 "name": "static" 00:23:28.942 } 00:23:28.942 } 00:23:28.942 ] 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "subsystem": "nvmf", 00:23:28.942 "config": [ 00:23:28.942 { 00:23:28.942 "method": "nvmf_set_config", 00:23:28.942 "params": { 00:23:28.942 "discovery_filter": "match_any", 00:23:28.942 "admin_cmd_passthru": { 00:23:28.942 "identify_ctrlr": false 00:23:28.942 } 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "nvmf_set_max_subsystems", 00:23:28.942 "params": { 00:23:28.942 "max_subsystems": 1024 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "nvmf_set_crdt", 00:23:28.942 "params": { 00:23:28.942 "crdt1": 0, 00:23:28.942 "crdt2": 0, 00:23:28.942 "crdt3": 0 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "nvmf_create_transport", 00:23:28.942 "params": { 00:23:28.942 "trtype": "TCP", 00:23:28.942 "max_queue_depth": 128, 00:23:28.942 "max_io_qpairs_per_ctrlr": 127, 00:23:28.942 "in_capsule_data_size": 4096, 00:23:28.942 "max_io_size": 131072, 00:23:28.942 "io_unit_size": 131072, 00:23:28.942 "max_aq_depth": 128, 00:23:28.942 "num_shared_buffers": 511, 00:23:28.942 "buf_cache_size": 4294967295, 00:23:28.942 "dif_insert_or_strip": false, 00:23:28.942 "zcopy": false, 00:23:28.942 "c2h_success": false, 00:23:28.942 "sock_priority": 0, 00:23:28.942 "abort_timeout_sec": 1, 00:23:28.942 "ack_timeout": 0, 00:23:28.942 "data_wr_pool_size": 0 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "nvmf_create_subsystem", 00:23:28.942 "params": { 00:23:28.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.942 "allow_any_host": false, 00:23:28.942 "serial_number": "SPDK00000000000001", 00:23:28.942 "model_number": "SPDK bdev Controller", 00:23:28.942 "max_namespaces": 10, 00:23:28.942 "min_cntlid": 1, 00:23:28.942 "max_cntlid": 65519, 00:23:28.942 "ana_reporting": false 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "nvmf_subsystem_add_host", 00:23:28.942 "params": { 00:23:28.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.942 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.942 "psk": "/tmp/tmp.Nqm5Ygls3m" 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "nvmf_subsystem_add_ns", 00:23:28.942 "params": { 00:23:28.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.942 "namespace": { 00:23:28.942 "nsid": 1, 00:23:28.942 "bdev_name": "malloc0", 00:23:28.942 "nguid": "793FA73AB49B4B3AADB9735544F34F0B", 00:23:28.942 "uuid": "793fa73a-b49b-4b3a-adb9-735544f34f0b", 00:23:28.942 "no_auto_visible": false 00:23:28.942 } 00:23:28.942 } 00:23:28.942 }, 00:23:28.942 { 00:23:28.942 "method": "nvmf_subsystem_add_listener", 00:23:28.942 "params": { 00:23:28.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.943 "listen_address": { 00:23:28.943 "trtype": "TCP", 00:23:28.943 "adrfam": "IPv4", 00:23:28.943 "traddr": "10.0.0.2", 00:23:28.943 "trsvcid": "4420" 00:23:28.943 }, 00:23:28.943 "secure_channel": true 00:23:28.943 } 00:23:28.943 } 00:23:28.943 ] 00:23:28.943 } 00:23:28.943 ] 00:23:28.943 }' 00:23:28.943 01:09:18 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:29.199 01:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:29.199 "subsystems": [ 00:23:29.199 { 00:23:29.199 "subsystem": "keyring", 00:23:29.199 "config": [] 00:23:29.199 }, 00:23:29.199 { 00:23:29.199 "subsystem": "iobuf", 00:23:29.199 "config": [ 00:23:29.199 { 00:23:29.199 "method": "iobuf_set_options", 00:23:29.199 "params": { 00:23:29.199 "small_pool_count": 8192, 00:23:29.199 "large_pool_count": 1024, 00:23:29.199 "small_bufsize": 8192, 00:23:29.199 "large_bufsize": 135168 00:23:29.199 } 00:23:29.199 } 00:23:29.199 ] 00:23:29.199 }, 00:23:29.199 { 00:23:29.199 "subsystem": "sock", 00:23:29.199 "config": [ 00:23:29.199 { 00:23:29.199 "method": "sock_set_default_impl", 00:23:29.199 "params": { 00:23:29.199 "impl_name": "posix" 00:23:29.199 } 00:23:29.199 }, 00:23:29.199 { 00:23:29.199 "method": "sock_impl_set_options", 00:23:29.199 "params": { 00:23:29.199 "impl_name": "ssl", 00:23:29.199 "recv_buf_size": 4096, 00:23:29.199 "send_buf_size": 4096, 00:23:29.199 "enable_recv_pipe": true, 00:23:29.199 "enable_quickack": false, 00:23:29.199 "enable_placement_id": 0, 00:23:29.200 "enable_zerocopy_send_server": true, 00:23:29.200 "enable_zerocopy_send_client": false, 00:23:29.200 "zerocopy_threshold": 0, 00:23:29.200 "tls_version": 0, 00:23:29.200 "enable_ktls": false 00:23:29.200 } 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "method": "sock_impl_set_options", 00:23:29.200 "params": { 00:23:29.200 "impl_name": "posix", 00:23:29.200 "recv_buf_size": 2097152, 00:23:29.200 "send_buf_size": 2097152, 00:23:29.200 "enable_recv_pipe": true, 00:23:29.200 "enable_quickack": false, 00:23:29.200 "enable_placement_id": 0, 00:23:29.200 "enable_zerocopy_send_server": true, 00:23:29.200 "enable_zerocopy_send_client": false, 00:23:29.200 "zerocopy_threshold": 0, 00:23:29.200 "tls_version": 0, 00:23:29.200 "enable_ktls": false 00:23:29.200 } 00:23:29.200 } 00:23:29.200 ] 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "subsystem": "vmd", 00:23:29.200 "config": [] 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "subsystem": "accel", 00:23:29.200 "config": [ 00:23:29.200 { 00:23:29.200 "method": "accel_set_options", 00:23:29.200 "params": { 00:23:29.200 "small_cache_size": 128, 00:23:29.200 "large_cache_size": 16, 00:23:29.200 "task_count": 2048, 00:23:29.200 "sequence_count": 2048, 00:23:29.200 "buf_count": 2048 00:23:29.200 } 00:23:29.200 } 00:23:29.200 ] 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "subsystem": "bdev", 00:23:29.200 "config": [ 00:23:29.200 { 00:23:29.200 "method": "bdev_set_options", 00:23:29.200 "params": { 00:23:29.200 "bdev_io_pool_size": 65535, 00:23:29.200 "bdev_io_cache_size": 256, 00:23:29.200 "bdev_auto_examine": true, 00:23:29.200 "iobuf_small_cache_size": 128, 00:23:29.200 "iobuf_large_cache_size": 16 00:23:29.200 } 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "method": "bdev_raid_set_options", 00:23:29.200 "params": { 00:23:29.200 "process_window_size_kb": 1024 00:23:29.200 } 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "method": "bdev_iscsi_set_options", 00:23:29.200 "params": { 00:23:29.200 "timeout_sec": 30 00:23:29.200 } 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "method": "bdev_nvme_set_options", 00:23:29.200 "params": { 00:23:29.200 "action_on_timeout": "none", 00:23:29.200 "timeout_us": 0, 00:23:29.200 "timeout_admin_us": 0, 00:23:29.200 "keep_alive_timeout_ms": 10000, 00:23:29.200 "arbitration_burst": 0, 
00:23:29.200 "low_priority_weight": 0, 00:23:29.200 "medium_priority_weight": 0, 00:23:29.200 "high_priority_weight": 0, 00:23:29.200 "nvme_adminq_poll_period_us": 10000, 00:23:29.200 "nvme_ioq_poll_period_us": 0, 00:23:29.200 "io_queue_requests": 512, 00:23:29.200 "delay_cmd_submit": true, 00:23:29.200 "transport_retry_count": 4, 00:23:29.200 "bdev_retry_count": 3, 00:23:29.200 "transport_ack_timeout": 0, 00:23:29.200 "ctrlr_loss_timeout_sec": 0, 00:23:29.200 "reconnect_delay_sec": 0, 00:23:29.200 "fast_io_fail_timeout_sec": 0, 00:23:29.200 "disable_auto_failback": false, 00:23:29.200 "generate_uuids": false, 00:23:29.200 "transport_tos": 0, 00:23:29.200 "nvme_error_stat": false, 00:23:29.200 "rdma_srq_size": 0, 00:23:29.200 "io_path_stat": false, 00:23:29.200 "allow_accel_sequence": false, 00:23:29.200 "rdma_max_cq_size": 0, 00:23:29.200 "rdma_cm_event_timeout_ms": 0, 00:23:29.200 "dhchap_digests": [ 00:23:29.200 "sha256", 00:23:29.200 "sha384", 00:23:29.200 "sha512" 00:23:29.200 ], 00:23:29.200 "dhchap_dhgroups": [ 00:23:29.200 "null", 00:23:29.200 "ffdhe2048", 00:23:29.200 "ffdhe3072", 00:23:29.200 "ffdhe4096", 00:23:29.200 "ffdhe6144", 00:23:29.200 "ffdhe8192" 00:23:29.200 ] 00:23:29.200 } 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "method": "bdev_nvme_attach_controller", 00:23:29.200 "params": { 00:23:29.200 "name": "TLSTEST", 00:23:29.200 "trtype": "TCP", 00:23:29.200 "adrfam": "IPv4", 00:23:29.200 "traddr": "10.0.0.2", 00:23:29.200 "trsvcid": "4420", 00:23:29.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.200 "prchk_reftag": false, 00:23:29.200 "prchk_guard": false, 00:23:29.200 "ctrlr_loss_timeout_sec": 0, 00:23:29.200 "reconnect_delay_sec": 0, 00:23:29.200 "fast_io_fail_timeout_sec": 0, 00:23:29.200 "psk": "/tmp/tmp.Nqm5Ygls3m", 00:23:29.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.200 "hdgst": false, 00:23:29.200 "ddgst": false 00:23:29.200 } 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "method": "bdev_nvme_set_hotplug", 00:23:29.200 "params": { 00:23:29.200 "period_us": 100000, 00:23:29.200 "enable": false 00:23:29.200 } 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "method": "bdev_wait_for_examine" 00:23:29.200 } 00:23:29.200 ] 00:23:29.200 }, 00:23:29.200 { 00:23:29.200 "subsystem": "nbd", 00:23:29.200 "config": [] 00:23:29.200 } 00:23:29.200 ] 00:23:29.200 }' 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1184972 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1184972 ']' 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1184972 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184972 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184972' 00:23:29.200 killing process with pid 1184972 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1184972 00:23:29.200 Received shutdown signal, test time was about 10.000000 seconds 00:23:29.200 00:23:29.200 Latency(us) 00:23:29.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:29.200 =================================================================================================================== 00:23:29.200 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:29.200 [2024-07-14 01:09:18.581355] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:29.200 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1184972 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1184685 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1184685 ']' 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1184685 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1184685 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1184685' 00:23:29.457 killing process with pid 1184685 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1184685 00:23:29.457 [2024-07-14 01:09:18.835045] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:29.457 01:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1184685 00:23:29.716 01:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:29.716 01:09:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.716 01:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:29.716 "subsystems": [ 00:23:29.716 { 00:23:29.716 "subsystem": "keyring", 00:23:29.716 "config": [] 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "subsystem": "iobuf", 00:23:29.716 "config": [ 00:23:29.716 { 00:23:29.716 "method": "iobuf_set_options", 00:23:29.716 "params": { 00:23:29.716 "small_pool_count": 8192, 00:23:29.716 "large_pool_count": 1024, 00:23:29.716 "small_bufsize": 8192, 00:23:29.716 "large_bufsize": 135168 00:23:29.716 } 00:23:29.716 } 00:23:29.716 ] 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "subsystem": "sock", 00:23:29.716 "config": [ 00:23:29.716 { 00:23:29.716 "method": "sock_set_default_impl", 00:23:29.716 "params": { 00:23:29.716 "impl_name": "posix" 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "sock_impl_set_options", 00:23:29.716 "params": { 00:23:29.716 "impl_name": "ssl", 00:23:29.716 "recv_buf_size": 4096, 00:23:29.716 "send_buf_size": 4096, 00:23:29.716 "enable_recv_pipe": true, 00:23:29.716 "enable_quickack": false, 00:23:29.716 "enable_placement_id": 0, 00:23:29.716 "enable_zerocopy_send_server": true, 00:23:29.716 "enable_zerocopy_send_client": false, 00:23:29.716 "zerocopy_threshold": 0, 00:23:29.716 "tls_version": 0, 00:23:29.716 "enable_ktls": false 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "sock_impl_set_options", 00:23:29.716 "params": { 00:23:29.716 "impl_name": "posix", 00:23:29.716 "recv_buf_size": 2097152, 00:23:29.716 "send_buf_size": 2097152, 00:23:29.716 "enable_recv_pipe": true, 
00:23:29.716 "enable_quickack": false, 00:23:29.716 "enable_placement_id": 0, 00:23:29.716 "enable_zerocopy_send_server": true, 00:23:29.716 "enable_zerocopy_send_client": false, 00:23:29.716 "zerocopy_threshold": 0, 00:23:29.716 "tls_version": 0, 00:23:29.716 "enable_ktls": false 00:23:29.716 } 00:23:29.716 } 00:23:29.716 ] 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "subsystem": "vmd", 00:23:29.716 "config": [] 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "subsystem": "accel", 00:23:29.716 "config": [ 00:23:29.716 { 00:23:29.716 "method": "accel_set_options", 00:23:29.716 "params": { 00:23:29.716 "small_cache_size": 128, 00:23:29.716 "large_cache_size": 16, 00:23:29.716 "task_count": 2048, 00:23:29.716 "sequence_count": 2048, 00:23:29.716 "buf_count": 2048 00:23:29.716 } 00:23:29.716 } 00:23:29.716 ] 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "subsystem": "bdev", 00:23:29.716 "config": [ 00:23:29.716 { 00:23:29.716 "method": "bdev_set_options", 00:23:29.716 "params": { 00:23:29.716 "bdev_io_pool_size": 65535, 00:23:29.716 "bdev_io_cache_size": 256, 00:23:29.716 "bdev_auto_examine": true, 00:23:29.716 "iobuf_small_cache_size": 128, 00:23:29.716 "iobuf_large_cache_size": 16 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "bdev_raid_set_options", 00:23:29.716 "params": { 00:23:29.716 "process_window_size_kb": 1024 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "bdev_iscsi_set_options", 00:23:29.716 "params": { 00:23:29.716 "timeout_sec": 30 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "bdev_nvme_set_options", 00:23:29.716 "params": { 00:23:29.716 "action_on_timeout": "none", 00:23:29.716 "timeout_us": 0, 00:23:29.716 "timeout_admin_us": 0, 00:23:29.716 "keep_alive_timeout_ms": 10000, 00:23:29.716 "arbitration_burst": 0, 00:23:29.716 "low_priority_weight": 0, 00:23:29.716 "medium_priority_weight": 0, 00:23:29.716 "high_priority_weight": 0, 00:23:29.716 "nvme_adminq_poll_period_us": 10000, 00:23:29.716 "nvme_ioq_poll_period_us": 0, 00:23:29.716 "io_queue_requests": 0, 00:23:29.716 "delay_cmd_submit": true, 00:23:29.716 "transport_retry_count": 4, 00:23:29.716 "bdev_retry_count": 3, 00:23:29.716 "transport_ack_timeout": 0, 00:23:29.716 "ctrlr_loss_timeout_sec": 0, 00:23:29.716 "reconnect_delay_sec": 0, 00:23:29.716 "fast_io_fail_timeout_sec": 0, 00:23:29.716 "disable_auto_failback": false, 00:23:29.716 "generate_uuids": false, 00:23:29.716 "transport_tos": 0, 00:23:29.716 "nvme_error_stat": false, 00:23:29.716 "rdma_srq_size": 0, 00:23:29.716 "io_path_stat": false, 00:23:29.716 "allow_accel_sequence": false, 00:23:29.716 "rdma_max_cq_size": 0, 00:23:29.716 "rdma_cm_event_timeout_ms": 0, 00:23:29.716 "dhchap_digests": [ 00:23:29.716 "sha256", 00:23:29.716 "sha384", 00:23:29.716 "sha512" 00:23:29.716 ], 00:23:29.716 "dhchap_dhgroups": [ 00:23:29.716 "null", 00:23:29.716 "ffdhe2048", 00:23:29.716 "ffdhe3072", 00:23:29.716 "ffdhe4096", 00:23:29.716 "ffdhe6144", 00:23:29.716 "ffdhe8192" 00:23:29.716 ] 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "bdev_nvme_set_hotplug", 00:23:29.716 "params": { 00:23:29.716 "period_us": 100000, 00:23:29.716 "enable": false 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "bdev_malloc_create", 00:23:29.716 "params": { 00:23:29.716 "name": "malloc0", 00:23:29.716 "num_blocks": 8192, 00:23:29.716 "block_size": 4096, 00:23:29.716 "physical_block_size": 4096, 00:23:29.716 "uuid": "793fa73a-b49b-4b3a-adb9-735544f34f0b", 00:23:29.716 "optimal_io_boundary": 0 
00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "bdev_wait_for_examine" 00:23:29.716 } 00:23:29.716 ] 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "subsystem": "nbd", 00:23:29.716 "config": [] 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "subsystem": "scheduler", 00:23:29.716 "config": [ 00:23:29.716 { 00:23:29.716 "method": "framework_set_scheduler", 00:23:29.716 "params": { 00:23:29.716 "name": "static" 00:23:29.716 } 00:23:29.716 } 00:23:29.716 ] 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "subsystem": "nvmf", 00:23:29.716 "config": [ 00:23:29.716 { 00:23:29.716 "method": "nvmf_set_config", 00:23:29.716 "params": { 00:23:29.716 "discovery_filter": "match_any", 00:23:29.716 "admin_cmd_passthru": { 00:23:29.716 "identify_ctrlr": false 00:23:29.716 } 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "nvmf_set_max_subsystems", 00:23:29.716 "params": { 00:23:29.716 "max_subsystems": 1024 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "nvmf_set_crdt", 00:23:29.716 "params": { 00:23:29.716 "crdt1": 0, 00:23:29.716 "crdt2": 0, 00:23:29.716 "crdt3": 0 00:23:29.716 } 00:23:29.716 }, 00:23:29.716 { 00:23:29.716 "method": "nvmf_create_transport", 00:23:29.716 "params": { 00:23:29.716 "trtype": "TCP", 00:23:29.716 "max_queue_depth": 128, 00:23:29.716 "max_io_qpairs_per_ctrlr": 127, 00:23:29.716 "in_capsule_data_size": 4096, 00:23:29.716 "max_io_size": 131072, 00:23:29.717 "io_unit_size": 131072, 00:23:29.717 "max_aq_depth": 128, 00:23:29.717 "num_shared_buffers": 511, 00:23:29.717 "buf_cache_size": 4294967295, 00:23:29.717 "dif_insert_or_strip": false, 00:23:29.717 "zcopy": false, 00:23:29.717 "c2h_success": false, 00:23:29.717 "sock_priority": 0, 00:23:29.717 "abort_timeout_sec": 1, 00:23:29.717 "ack_timeout": 0, 00:23:29.717 "data_wr_pool_size": 0 00:23:29.717 } 00:23:29.717 }, 00:23:29.717 { 00:23:29.717 "method": "nvmf_create_subsystem", 00:23:29.717 "params": { 00:23:29.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.717 "allow_any_host": false, 00:23:29.717 "serial_number": "SPDK00000000000001", 00:23:29.717 "model_number": "SPDK bdev Controller", 00:23:29.717 "max_namespaces": 10, 00:23:29.717 "min_cntlid": 1, 00:23:29.717 "max_cntlid": 65519, 00:23:29.717 "ana_reporting": false 00:23:29.717 } 00:23:29.717 }, 00:23:29.717 { 00:23:29.717 "method": "nvmf_subsystem_add_host", 00:23:29.717 "params": { 00:23:29.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.717 "host": "nqn.2016-06.io.spdk:host1", 00:23:29.717 "psk": "/tmp/tmp.Nqm5Ygls3m" 00:23:29.717 } 00:23:29.717 }, 00:23:29.717 { 00:23:29.717 "method": "nvmf_subsystem_add_ns", 00:23:29.717 "params": { 00:23:29.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.717 "namespace": { 00:23:29.717 "nsid": 1, 00:23:29.717 "bdev_name": "malloc0", 00:23:29.717 "nguid": "793FA73AB49B4B3AADB9735544F34F0B", 00:23:29.717 "uuid": "793fa73a-b49b-4b3a-adb9-735544f34f0b", 00:23:29.717 "no_auto_visible": false 00:23:29.717 } 00:23:29.717 } 00:23:29.717 }, 00:23:29.717 { 00:23:29.717 "method": "nvmf_subsystem_add_listener", 00:23:29.717 "params": { 00:23:29.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.717 "listen_address": { 00:23:29.717 "trtype": "TCP", 00:23:29.717 "adrfam": "IPv4", 00:23:29.717 "traddr": "10.0.0.2", 00:23:29.717 "trsvcid": "4420" 00:23:29.717 }, 00:23:29.717 "secure_channel": true 00:23:29.717 } 00:23:29.717 } 00:23:29.717 ] 00:23:29.717 } 00:23:29.717 ] 00:23:29.717 }' 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.717 
01:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1185137 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1185137 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1185137 ']' 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.717 01:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.717 [2024-07-14 01:09:19.125357] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:29.717 [2024-07-14 01:09:19.125448] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.975 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.975 [2024-07-14 01:09:19.198578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.975 [2024-07-14 01:09:19.289051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.975 [2024-07-14 01:09:19.289114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.975 [2024-07-14 01:09:19.289128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.975 [2024-07-14 01:09:19.289140] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.975 [2024-07-14 01:09:19.289150] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:29.975 [2024-07-14 01:09:19.289234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.233 [2024-07-14 01:09:19.522019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.233 [2024-07-14 01:09:19.537968] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:30.233 [2024-07-14 01:09:19.554027] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:30.233 [2024-07-14 01:09:19.564028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.799 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.799 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:30.799 01:09:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.799 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:30.799 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.799 01:09:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.799 01:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1185282 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1185282 /var/tmp/bdevperf.sock 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1185282 ']' 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:30.800 "subsystems": [ 00:23:30.800 { 00:23:30.800 "subsystem": "keyring", 00:23:30.800 "config": [] 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "subsystem": "iobuf", 00:23:30.800 "config": [ 00:23:30.800 { 00:23:30.800 "method": "iobuf_set_options", 00:23:30.800 "params": { 00:23:30.800 "small_pool_count": 8192, 00:23:30.800 "large_pool_count": 1024, 00:23:30.800 "small_bufsize": 8192, 00:23:30.800 "large_bufsize": 135168 00:23:30.800 } 00:23:30.800 } 00:23:30.800 ] 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "subsystem": "sock", 00:23:30.800 "config": [ 00:23:30.800 { 00:23:30.800 "method": "sock_set_default_impl", 00:23:30.800 "params": { 00:23:30.800 "impl_name": "posix" 00:23:30.800 } 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "method": "sock_impl_set_options", 00:23:30.800 "params": { 00:23:30.800 "impl_name": "ssl", 00:23:30.800 "recv_buf_size": 4096, 00:23:30.800 "send_buf_size": 4096, 00:23:30.800 "enable_recv_pipe": true, 00:23:30.800 "enable_quickack": false, 00:23:30.800 "enable_placement_id": 0, 00:23:30.800 "enable_zerocopy_send_server": true, 00:23:30.800 "enable_zerocopy_send_client": false, 00:23:30.800 "zerocopy_threshold": 0, 00:23:30.800 "tls_version": 0, 00:23:30.800 "enable_ktls": false 00:23:30.800 } 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "method": "sock_impl_set_options", 00:23:30.800 "params": { 00:23:30.800 "impl_name": "posix", 00:23:30.800 "recv_buf_size": 2097152, 00:23:30.800 "send_buf_size": 2097152, 00:23:30.800 "enable_recv_pipe": true, 00:23:30.800 
"enable_quickack": false, 00:23:30.800 "enable_placement_id": 0, 00:23:30.800 "enable_zerocopy_send_server": true, 00:23:30.800 "enable_zerocopy_send_client": false, 00:23:30.800 "zerocopy_threshold": 0, 00:23:30.800 "tls_version": 0, 00:23:30.800 "enable_ktls": false 00:23:30.800 } 00:23:30.800 } 00:23:30.800 ] 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "subsystem": "vmd", 00:23:30.800 "config": [] 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "subsystem": "accel", 00:23:30.800 "config": [ 00:23:30.800 { 00:23:30.800 "method": "accel_set_options", 00:23:30.800 "params": { 00:23:30.800 "small_cache_size": 128, 00:23:30.800 "large_cache_size": 16, 00:23:30.800 "task_count": 2048, 00:23:30.800 "sequence_count": 2048, 00:23:30.800 "buf_count": 2048 00:23:30.800 } 00:23:30.800 } 00:23:30.800 ] 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "subsystem": "bdev", 00:23:30.800 "config": [ 00:23:30.800 { 00:23:30.800 "method": "bdev_set_options", 00:23:30.800 "params": { 00:23:30.800 "bdev_io_pool_size": 65535, 00:23:30.800 "bdev_io_cache_size": 256, 00:23:30.800 "bdev_auto_examine": true, 00:23:30.800 "iobuf_small_cache_size": 128, 00:23:30.800 "iobuf_large_cache_size": 16 00:23:30.800 } 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "method": "bdev_raid_set_options", 00:23:30.800 "params": { 00:23:30.800 "process_window_size_kb": 1024 00:23:30.800 } 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "method": "bdev_iscsi_set_options", 00:23:30.800 "params": { 00:23:30.800 "timeout_sec": 30 00:23:30.800 } 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "method": "bdev_nvme_set_options", 00:23:30.800 "params": { 00:23:30.800 "action_on_timeout": "none", 00:23:30.800 "timeout_us": 0, 00:23:30.800 "timeout_admin_us": 0, 00:23:30.800 "keep_alive_timeout_ms": 10000, 00:23:30.800 "arbitration_burst": 0, 00:23:30.800 "low_priority_weight": 0, 00:23:30.800 "medium_priority_weight": 0, 00:23:30.800 "high_priority_weight": 0, 00:23:30.800 "nvme_adminq_poll_period_us": 10000, 00:23:30.800 "nvme_ioq_poll_period_us": 0, 00:23:30.800 "io_queue_requests": 512, 00:23:30.800 "delay_cmd_submit": true, 00:23:30.800 "transport_retry_count": 4, 00:23:30.800 "bdev_retry_count": 3, 00:23:30.800 "transport_ack_timeout": 0, 00:23:30.800 "ctrlr_loss_timeout_sec": 0, 00:23:30.800 "reconnect_delay_sec": 0, 00:23:30.800 "fast_io_fail_timeout_sec": 0, 00:23:30.800 "disable_auto_failback": false, 00:23:30.800 "generate_uuids": false, 00:23:30.800 "transport_tos": 0, 00:23:30.800 "nvme_error_stat": false, 00:23:30.800 "rdma_srq_size": 0, 00:23:30.800 "io_path_stat": false, 00:23:30.800 "allow_accel_sequence": false, 00:23:30.800 "rdma_max_cq_size": 0, 00:23:30.800 "rdma_cm_event_timeout_ms": 0, 00:23:30.800 "dhchap_digests": [ 00:23:30.800 "sha256", 00:23:30.800 "sha384", 00:23:30.800 "sha512" 00:23:30.800 ], 00:23:30.800 "dhchap_dhgroups": [ 00:23:30.800 "null", 00:23:30.800 "ffdhe2048", 00:23:30.800 "ffdhe3072", 00:23:30.800 "ffdhe4096", 00:23:30.800 "ffdhe6144", 00:23:30.800 "ffdhe8192" 00:23:30.800 ] 00:23:30.800 } 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "method": "bdev_nvme_attach_controller", 00:23:30.800 "params": { 00:23:30.800 "name": "TLSTEST", 00:23:30.800 "trtype": "TCP", 00:23:30.800 "adrfam": "IPv4", 00:23:30.800 "traddr": "10.0.0.2", 00:23:30.800 "trsvcid": "4420", 00:23:30.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.800 "prchk_reftag": false, 00:23:30.800 "prchk_guard": false, 00:23:30.800 "ctrlr_loss_timeout_sec": 0, 00:23:30.800 "reconnect_delay_sec": 0, 00:23:30.800 "fast_io_fail_timeout_sec": 0, 00:23:30.800 
"psk": "/tmp/tmp.Nqm5Ygls3m", 00:23:30.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.800 "hdgst": false, 00:23:30.800 "ddgst": false 00:23:30.800 } 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "method": "bdev_nvme_set_hotplug", 00:23:30.800 "params": { 00:23:30.800 "period_us": 100000, 00:23:30.800 "enable": false 00:23:30.800 } 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "method": "bdev_wait_for_examine" 00:23:30.800 } 00:23:30.800 ] 00:23:30.800 }, 00:23:30.800 { 00:23:30.800 "subsystem": "nbd", 00:23:30.800 "config": [] 00:23:30.800 } 00:23:30.800 ] 00:23:30.800 }' 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.800 01:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:30.800 [2024-07-14 01:09:20.173136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:30.800 [2024-07-14 01:09:20.173219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185282 ] 00:23:30.800 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.059 [2024-07-14 01:09:20.232666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.059 [2024-07-14 01:09:20.320385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.317 [2024-07-14 01:09:20.489586] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.317 [2024-07-14 01:09:20.489719] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:31.882 01:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:31.882 01:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:31.882 01:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:31.882 Running I/O for 10 seconds... 
00:23:44.072 00:23:44.072 Latency(us) 00:23:44.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.072 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:44.072 Verification LBA range: start 0x0 length 0x2000 00:23:44.072 TLSTESTn1 : 10.06 1830.00 7.15 0.00 0.00 69736.28 6213.78 99420.54 00:23:44.072 =================================================================================================================== 00:23:44.072 Total : 1830.00 7.15 0.00 0.00 69736.28 6213.78 99420.54 00:23:44.072 0 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1185282 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1185282 ']' 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1185282 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1185282 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1185282' 00:23:44.072 killing process with pid 1185282 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1185282 00:23:44.072 Received shutdown signal, test time was about 10.000000 seconds 00:23:44.072 00:23:44.072 Latency(us) 00:23:44.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.072 =================================================================================================================== 00:23:44.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.072 [2024-07-14 01:09:31.418050] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1185282 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1185137 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1185137 ']' 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1185137 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1185137 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1185137' 00:23:44.072 killing process with pid 1185137 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1185137 00:23:44.072 [2024-07-14 01:09:31.667028] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in 
v24.09 hit 1 times 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1185137 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.072 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1186725 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1186725 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1186725 ']' 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.073 01:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.073 [2024-07-14 01:09:31.972291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:44.073 [2024-07-14 01:09:31.972386] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.073 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.073 [2024-07-14 01:09:32.042929] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.073 [2024-07-14 01:09:32.131418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.073 [2024-07-14 01:09:32.131483] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.073 [2024-07-14 01:09:32.131511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.073 [2024-07-14 01:09:32.131525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.073 [2024-07-14 01:09:32.131537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.073 [2024-07-14 01:09:32.131569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Nqm5Ygls3m 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Nqm5Ygls3m 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:44.073 [2024-07-14 01:09:32.493649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:44.073 [2024-07-14 01:09:32.983002] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.073 [2024-07-14 01:09:32.983274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.073 01:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:44.073 malloc0 00:23:44.073 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:44.330 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Nqm5Ygls3m 00:23:44.587 [2024-07-14 01:09:33.761436] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1186893 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1186893 /var/tmp/bdevperf.sock 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1186893 ']' 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.587 01:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.587 [2024-07-14 01:09:33.819976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:44.587 [2024-07-14 01:09:33.820064] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186893 ] 00:23:44.587 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.587 [2024-07-14 01:09:33.883435] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.587 [2024-07-14 01:09:33.973822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.845 01:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.845 01:09:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:44.845 01:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nqm5Ygls3m 00:23:45.103 01:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:45.360 [2024-07-14 01:09:34.593342] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.360 nvme0n1 00:23:45.360 01:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.618 Running I/O for 1 seconds... 
00:23:46.553 00:23:46.553 Latency(us) 00:23:46.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.553 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:46.553 Verification LBA range: start 0x0 length 0x2000 00:23:46.553 nvme0n1 : 1.06 1721.43 6.72 0.00 0.00 72541.26 6310.87 103304.15 00:23:46.553 =================================================================================================================== 00:23:46.553 Total : 1721.43 6.72 0.00 0.00 72541.26 6310.87 103304.15 00:23:46.553 0 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1186893 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1186893 ']' 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1186893 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1186893 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1186893' 00:23:46.553 killing process with pid 1186893 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1186893 00:23:46.553 Received shutdown signal, test time was about 1.000000 seconds 00:23:46.553 00:23:46.553 Latency(us) 00:23:46.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.553 =================================================================================================================== 00:23:46.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.553 01:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1186893 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1186725 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1186725 ']' 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1186725 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1186725 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1186725' 00:23:46.813 killing process with pid 1186725 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1186725 00:23:46.813 [2024-07-14 01:09:36.165896] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:46.813 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1186725 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.103 
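(For reference, not part of the captured output: the one-second run above replaces the deprecated --psk file path with the keyring interface; the PSK file is first registered as a named key on the bdevperf instance and the controller is then attached by key name. Paths shortened to the SPDK source tree:)

    # register the PSK file as key0 in the bdevperf application's keyring
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nqm5Ygls3m
    # attach the controller referencing the key by name instead of a file path
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # run the verify workload against the attached namespace
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests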
01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1187288 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1187288 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1187288 ']' 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:47.103 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.103 [2024-07-14 01:09:36.447063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:47.103 [2024-07-14 01:09:36.447142] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.103 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.370 [2024-07-14 01:09:36.515010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.371 [2024-07-14 01:09:36.603161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.371 [2024-07-14 01:09:36.603227] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.371 [2024-07-14 01:09:36.603243] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.371 [2024-07-14 01:09:36.603257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.371 [2024-07-14 01:09:36.603268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
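Note: the nvmf_tgt instance being waited on here is launched inside the cvl_0_0_ns_spdk network namespace and is only usable once its RPC socket answers. A minimal sketch of what the nvmfappstart/waitforlisten helpers do (namespace, binary path and socket are the ones from this run; the real helpers retry a bounded number of times rather than looping forever, and rpc_get_methods is just one convenient no-op RPC to poll with):

    # launch the NVMe-oF target in the test namespace, tracing enabled (-e 0xFFFF)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # wait until the RPC server is listening before issuing any configuration RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done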
00:23:47.371 [2024-07-14 01:09:36.603315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.371 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.371 [2024-07-14 01:09:36.755395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.371 malloc0 00:23:47.629 [2024-07-14 01:09:36.788413] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.629 [2024-07-14 01:09:36.788673] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1187316 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1187316 /var/tmp/bdevperf.sock 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1187316 ']' 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:47.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.629 01:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.629 [2024-07-14 01:09:36.858843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
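Note: the RPCs issued above (TCP transport init, malloc0 namespace, experimental TLS listener on 10.0.0.2:4420) correspond to the target configuration that save_config dumps further down. As a sketch only, the target side of the TLS setup comes down to the following, assuming the default /var/tmp/spdk.sock RPC socket; the exact --psk and --secure-channel spellings should be checked against the SPDK release in use:

    ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Nqm5Ygls3m
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096        # 8192 blocks x 4096 B
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 --secure-channel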
00:23:47.629 [2024-07-14 01:09:36.858953] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187316 ] 00:23:47.629 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.629 [2024-07-14 01:09:36.919894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.629 [2024-07-14 01:09:37.010540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.887 01:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.887 01:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:47.887 01:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Nqm5Ygls3m 00:23:48.145 01:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:48.403 [2024-07-14 01:09:37.635723] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.403 nvme0n1 00:23:48.403 01:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:48.403 Running I/O for 1 seconds... 00:23:49.775 00:23:49.775 Latency(us) 00:23:49.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.775 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:49.775 Verification LBA range: start 0x0 length 0x2000 00:23:49.775 nvme0n1 : 1.06 1652.30 6.45 0.00 0.00 75651.85 10243.03 118061.89 00:23:49.775 =================================================================================================================== 00:23:49.775 Total : 1652.30 6.45 0.00 0.00 75651.85 10243.03 118061.89 00:23:49.775 0 00:23:49.775 01:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:49.775 01:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.775 01:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.775 01:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.775 01:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:49.775 "subsystems": [ 00:23:49.775 { 00:23:49.775 "subsystem": "keyring", 00:23:49.775 "config": [ 00:23:49.775 { 00:23:49.775 "method": "keyring_file_add_key", 00:23:49.775 "params": { 00:23:49.775 "name": "key0", 00:23:49.775 "path": "/tmp/tmp.Nqm5Ygls3m" 00:23:49.775 } 00:23:49.775 } 00:23:49.775 ] 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "subsystem": "iobuf", 00:23:49.775 "config": [ 00:23:49.775 { 00:23:49.775 "method": "iobuf_set_options", 00:23:49.775 "params": { 00:23:49.775 "small_pool_count": 8192, 00:23:49.775 "large_pool_count": 1024, 00:23:49.775 "small_bufsize": 8192, 00:23:49.775 "large_bufsize": 135168 00:23:49.775 } 00:23:49.775 } 00:23:49.775 ] 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "subsystem": "sock", 00:23:49.775 "config": [ 00:23:49.775 { 00:23:49.775 "method": "sock_set_default_impl", 00:23:49.775 "params": { 00:23:49.775 "impl_name": "posix" 00:23:49.775 } 
00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "sock_impl_set_options", 00:23:49.775 "params": { 00:23:49.775 "impl_name": "ssl", 00:23:49.775 "recv_buf_size": 4096, 00:23:49.775 "send_buf_size": 4096, 00:23:49.775 "enable_recv_pipe": true, 00:23:49.775 "enable_quickack": false, 00:23:49.775 "enable_placement_id": 0, 00:23:49.775 "enable_zerocopy_send_server": true, 00:23:49.775 "enable_zerocopy_send_client": false, 00:23:49.775 "zerocopy_threshold": 0, 00:23:49.775 "tls_version": 0, 00:23:49.775 "enable_ktls": false 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "sock_impl_set_options", 00:23:49.775 "params": { 00:23:49.775 "impl_name": "posix", 00:23:49.775 "recv_buf_size": 2097152, 00:23:49.775 "send_buf_size": 2097152, 00:23:49.775 "enable_recv_pipe": true, 00:23:49.775 "enable_quickack": false, 00:23:49.775 "enable_placement_id": 0, 00:23:49.775 "enable_zerocopy_send_server": true, 00:23:49.775 "enable_zerocopy_send_client": false, 00:23:49.775 "zerocopy_threshold": 0, 00:23:49.775 "tls_version": 0, 00:23:49.775 "enable_ktls": false 00:23:49.775 } 00:23:49.775 } 00:23:49.775 ] 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "subsystem": "vmd", 00:23:49.775 "config": [] 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "subsystem": "accel", 00:23:49.775 "config": [ 00:23:49.775 { 00:23:49.775 "method": "accel_set_options", 00:23:49.775 "params": { 00:23:49.775 "small_cache_size": 128, 00:23:49.775 "large_cache_size": 16, 00:23:49.775 "task_count": 2048, 00:23:49.775 "sequence_count": 2048, 00:23:49.775 "buf_count": 2048 00:23:49.775 } 00:23:49.775 } 00:23:49.775 ] 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "subsystem": "bdev", 00:23:49.775 "config": [ 00:23:49.775 { 00:23:49.775 "method": "bdev_set_options", 00:23:49.775 "params": { 00:23:49.775 "bdev_io_pool_size": 65535, 00:23:49.775 "bdev_io_cache_size": 256, 00:23:49.775 "bdev_auto_examine": true, 00:23:49.775 "iobuf_small_cache_size": 128, 00:23:49.775 "iobuf_large_cache_size": 16 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "bdev_raid_set_options", 00:23:49.775 "params": { 00:23:49.775 "process_window_size_kb": 1024 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "bdev_iscsi_set_options", 00:23:49.775 "params": { 00:23:49.775 "timeout_sec": 30 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "bdev_nvme_set_options", 00:23:49.775 "params": { 00:23:49.775 "action_on_timeout": "none", 00:23:49.775 "timeout_us": 0, 00:23:49.775 "timeout_admin_us": 0, 00:23:49.775 "keep_alive_timeout_ms": 10000, 00:23:49.775 "arbitration_burst": 0, 00:23:49.775 "low_priority_weight": 0, 00:23:49.775 "medium_priority_weight": 0, 00:23:49.775 "high_priority_weight": 0, 00:23:49.775 "nvme_adminq_poll_period_us": 10000, 00:23:49.775 "nvme_ioq_poll_period_us": 0, 00:23:49.775 "io_queue_requests": 0, 00:23:49.775 "delay_cmd_submit": true, 00:23:49.775 "transport_retry_count": 4, 00:23:49.775 "bdev_retry_count": 3, 00:23:49.775 "transport_ack_timeout": 0, 00:23:49.775 "ctrlr_loss_timeout_sec": 0, 00:23:49.775 "reconnect_delay_sec": 0, 00:23:49.775 "fast_io_fail_timeout_sec": 0, 00:23:49.775 "disable_auto_failback": false, 00:23:49.775 "generate_uuids": false, 00:23:49.775 "transport_tos": 0, 00:23:49.775 "nvme_error_stat": false, 00:23:49.775 "rdma_srq_size": 0, 00:23:49.775 "io_path_stat": false, 00:23:49.775 "allow_accel_sequence": false, 00:23:49.775 "rdma_max_cq_size": 0, 00:23:49.775 "rdma_cm_event_timeout_ms": 0, 00:23:49.775 "dhchap_digests": [ 00:23:49.775 "sha256", 
00:23:49.775 "sha384", 00:23:49.775 "sha512" 00:23:49.775 ], 00:23:49.775 "dhchap_dhgroups": [ 00:23:49.775 "null", 00:23:49.775 "ffdhe2048", 00:23:49.775 "ffdhe3072", 00:23:49.775 "ffdhe4096", 00:23:49.775 "ffdhe6144", 00:23:49.775 "ffdhe8192" 00:23:49.775 ] 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "bdev_nvme_set_hotplug", 00:23:49.775 "params": { 00:23:49.775 "period_us": 100000, 00:23:49.775 "enable": false 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "bdev_malloc_create", 00:23:49.775 "params": { 00:23:49.775 "name": "malloc0", 00:23:49.775 "num_blocks": 8192, 00:23:49.775 "block_size": 4096, 00:23:49.775 "physical_block_size": 4096, 00:23:49.775 "uuid": "2cb6ad35-2291-4c4e-a908-08cc3ef5419b", 00:23:49.775 "optimal_io_boundary": 0 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "bdev_wait_for_examine" 00:23:49.775 } 00:23:49.775 ] 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "subsystem": "nbd", 00:23:49.775 "config": [] 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "subsystem": "scheduler", 00:23:49.775 "config": [ 00:23:49.775 { 00:23:49.775 "method": "framework_set_scheduler", 00:23:49.775 "params": { 00:23:49.775 "name": "static" 00:23:49.775 } 00:23:49.775 } 00:23:49.775 ] 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "subsystem": "nvmf", 00:23:49.775 "config": [ 00:23:49.775 { 00:23:49.775 "method": "nvmf_set_config", 00:23:49.775 "params": { 00:23:49.775 "discovery_filter": "match_any", 00:23:49.775 "admin_cmd_passthru": { 00:23:49.775 "identify_ctrlr": false 00:23:49.775 } 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "nvmf_set_max_subsystems", 00:23:49.775 "params": { 00:23:49.775 "max_subsystems": 1024 00:23:49.775 } 00:23:49.775 }, 00:23:49.775 { 00:23:49.775 "method": "nvmf_set_crdt", 00:23:49.775 "params": { 00:23:49.775 "crdt1": 0, 00:23:49.776 "crdt2": 0, 00:23:49.776 "crdt3": 0 00:23:49.776 } 00:23:49.776 }, 00:23:49.776 { 00:23:49.776 "method": "nvmf_create_transport", 00:23:49.776 "params": { 00:23:49.776 "trtype": "TCP", 00:23:49.776 "max_queue_depth": 128, 00:23:49.776 "max_io_qpairs_per_ctrlr": 127, 00:23:49.776 "in_capsule_data_size": 4096, 00:23:49.776 "max_io_size": 131072, 00:23:49.776 "io_unit_size": 131072, 00:23:49.776 "max_aq_depth": 128, 00:23:49.776 "num_shared_buffers": 511, 00:23:49.776 "buf_cache_size": 4294967295, 00:23:49.776 "dif_insert_or_strip": false, 00:23:49.776 "zcopy": false, 00:23:49.776 "c2h_success": false, 00:23:49.776 "sock_priority": 0, 00:23:49.776 "abort_timeout_sec": 1, 00:23:49.776 "ack_timeout": 0, 00:23:49.776 "data_wr_pool_size": 0 00:23:49.776 } 00:23:49.776 }, 00:23:49.776 { 00:23:49.776 "method": "nvmf_create_subsystem", 00:23:49.776 "params": { 00:23:49.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.776 "allow_any_host": false, 00:23:49.776 "serial_number": "00000000000000000000", 00:23:49.776 "model_number": "SPDK bdev Controller", 00:23:49.776 "max_namespaces": 32, 00:23:49.776 "min_cntlid": 1, 00:23:49.776 "max_cntlid": 65519, 00:23:49.776 "ana_reporting": false 00:23:49.776 } 00:23:49.776 }, 00:23:49.776 { 00:23:49.776 "method": "nvmf_subsystem_add_host", 00:23:49.776 "params": { 00:23:49.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.776 "host": "nqn.2016-06.io.spdk:host1", 00:23:49.776 "psk": "key0" 00:23:49.776 } 00:23:49.776 }, 00:23:49.776 { 00:23:49.776 "method": "nvmf_subsystem_add_ns", 00:23:49.776 "params": { 00:23:49.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.776 "namespace": { 00:23:49.776 "nsid": 1, 
00:23:49.776 "bdev_name": "malloc0", 00:23:49.776 "nguid": "2CB6AD3522914C4EA90808CC3EF5419B", 00:23:49.776 "uuid": "2cb6ad35-2291-4c4e-a908-08cc3ef5419b", 00:23:49.776 "no_auto_visible": false 00:23:49.776 } 00:23:49.776 } 00:23:49.776 }, 00:23:49.776 { 00:23:49.776 "method": "nvmf_subsystem_add_listener", 00:23:49.776 "params": { 00:23:49.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.776 "listen_address": { 00:23:49.776 "trtype": "TCP", 00:23:49.776 "adrfam": "IPv4", 00:23:49.776 "traddr": "10.0.0.2", 00:23:49.776 "trsvcid": "4420" 00:23:49.776 }, 00:23:49.776 "secure_channel": true 00:23:49.776 } 00:23:49.776 } 00:23:49.776 ] 00:23:49.776 } 00:23:49.776 ] 00:23:49.776 }' 00:23:49.776 01:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:50.034 "subsystems": [ 00:23:50.034 { 00:23:50.034 "subsystem": "keyring", 00:23:50.034 "config": [ 00:23:50.034 { 00:23:50.034 "method": "keyring_file_add_key", 00:23:50.034 "params": { 00:23:50.034 "name": "key0", 00:23:50.034 "path": "/tmp/tmp.Nqm5Ygls3m" 00:23:50.034 } 00:23:50.034 } 00:23:50.034 ] 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "subsystem": "iobuf", 00:23:50.034 "config": [ 00:23:50.034 { 00:23:50.034 "method": "iobuf_set_options", 00:23:50.034 "params": { 00:23:50.034 "small_pool_count": 8192, 00:23:50.034 "large_pool_count": 1024, 00:23:50.034 "small_bufsize": 8192, 00:23:50.034 "large_bufsize": 135168 00:23:50.034 } 00:23:50.034 } 00:23:50.034 ] 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "subsystem": "sock", 00:23:50.034 "config": [ 00:23:50.034 { 00:23:50.034 "method": "sock_set_default_impl", 00:23:50.034 "params": { 00:23:50.034 "impl_name": "posix" 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "sock_impl_set_options", 00:23:50.034 "params": { 00:23:50.034 "impl_name": "ssl", 00:23:50.034 "recv_buf_size": 4096, 00:23:50.034 "send_buf_size": 4096, 00:23:50.034 "enable_recv_pipe": true, 00:23:50.034 "enable_quickack": false, 00:23:50.034 "enable_placement_id": 0, 00:23:50.034 "enable_zerocopy_send_server": true, 00:23:50.034 "enable_zerocopy_send_client": false, 00:23:50.034 "zerocopy_threshold": 0, 00:23:50.034 "tls_version": 0, 00:23:50.034 "enable_ktls": false 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "sock_impl_set_options", 00:23:50.034 "params": { 00:23:50.034 "impl_name": "posix", 00:23:50.034 "recv_buf_size": 2097152, 00:23:50.034 "send_buf_size": 2097152, 00:23:50.034 "enable_recv_pipe": true, 00:23:50.034 "enable_quickack": false, 00:23:50.034 "enable_placement_id": 0, 00:23:50.034 "enable_zerocopy_send_server": true, 00:23:50.034 "enable_zerocopy_send_client": false, 00:23:50.034 "zerocopy_threshold": 0, 00:23:50.034 "tls_version": 0, 00:23:50.034 "enable_ktls": false 00:23:50.034 } 00:23:50.034 } 00:23:50.034 ] 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "subsystem": "vmd", 00:23:50.034 "config": [] 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "subsystem": "accel", 00:23:50.034 "config": [ 00:23:50.034 { 00:23:50.034 "method": "accel_set_options", 00:23:50.034 "params": { 00:23:50.034 "small_cache_size": 128, 00:23:50.034 "large_cache_size": 16, 00:23:50.034 "task_count": 2048, 00:23:50.034 "sequence_count": 2048, 00:23:50.034 "buf_count": 2048 00:23:50.034 } 00:23:50.034 } 00:23:50.034 ] 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "subsystem": "bdev", 00:23:50.034 "config": [ 
00:23:50.034 { 00:23:50.034 "method": "bdev_set_options", 00:23:50.034 "params": { 00:23:50.034 "bdev_io_pool_size": 65535, 00:23:50.034 "bdev_io_cache_size": 256, 00:23:50.034 "bdev_auto_examine": true, 00:23:50.034 "iobuf_small_cache_size": 128, 00:23:50.034 "iobuf_large_cache_size": 16 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "bdev_raid_set_options", 00:23:50.034 "params": { 00:23:50.034 "process_window_size_kb": 1024 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "bdev_iscsi_set_options", 00:23:50.034 "params": { 00:23:50.034 "timeout_sec": 30 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "bdev_nvme_set_options", 00:23:50.034 "params": { 00:23:50.034 "action_on_timeout": "none", 00:23:50.034 "timeout_us": 0, 00:23:50.034 "timeout_admin_us": 0, 00:23:50.034 "keep_alive_timeout_ms": 10000, 00:23:50.034 "arbitration_burst": 0, 00:23:50.034 "low_priority_weight": 0, 00:23:50.034 "medium_priority_weight": 0, 00:23:50.034 "high_priority_weight": 0, 00:23:50.034 "nvme_adminq_poll_period_us": 10000, 00:23:50.034 "nvme_ioq_poll_period_us": 0, 00:23:50.034 "io_queue_requests": 512, 00:23:50.034 "delay_cmd_submit": true, 00:23:50.034 "transport_retry_count": 4, 00:23:50.034 "bdev_retry_count": 3, 00:23:50.034 "transport_ack_timeout": 0, 00:23:50.034 "ctrlr_loss_timeout_sec": 0, 00:23:50.034 "reconnect_delay_sec": 0, 00:23:50.034 "fast_io_fail_timeout_sec": 0, 00:23:50.034 "disable_auto_failback": false, 00:23:50.034 "generate_uuids": false, 00:23:50.034 "transport_tos": 0, 00:23:50.034 "nvme_error_stat": false, 00:23:50.034 "rdma_srq_size": 0, 00:23:50.034 "io_path_stat": false, 00:23:50.034 "allow_accel_sequence": false, 00:23:50.034 "rdma_max_cq_size": 0, 00:23:50.034 "rdma_cm_event_timeout_ms": 0, 00:23:50.034 "dhchap_digests": [ 00:23:50.034 "sha256", 00:23:50.034 "sha384", 00:23:50.034 "sha512" 00:23:50.034 ], 00:23:50.034 "dhchap_dhgroups": [ 00:23:50.034 "null", 00:23:50.034 "ffdhe2048", 00:23:50.034 "ffdhe3072", 00:23:50.034 "ffdhe4096", 00:23:50.034 "ffdhe6144", 00:23:50.034 "ffdhe8192" 00:23:50.034 ] 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "bdev_nvme_attach_controller", 00:23:50.034 "params": { 00:23:50.034 "name": "nvme0", 00:23:50.034 "trtype": "TCP", 00:23:50.034 "adrfam": "IPv4", 00:23:50.034 "traddr": "10.0.0.2", 00:23:50.034 "trsvcid": "4420", 00:23:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.034 "prchk_reftag": false, 00:23:50.034 "prchk_guard": false, 00:23:50.034 "ctrlr_loss_timeout_sec": 0, 00:23:50.034 "reconnect_delay_sec": 0, 00:23:50.034 "fast_io_fail_timeout_sec": 0, 00:23:50.034 "psk": "key0", 00:23:50.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.034 "hdgst": false, 00:23:50.034 "ddgst": false 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "bdev_nvme_set_hotplug", 00:23:50.034 "params": { 00:23:50.034 "period_us": 100000, 00:23:50.034 "enable": false 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "bdev_enable_histogram", 00:23:50.034 "params": { 00:23:50.034 "name": "nvme0n1", 00:23:50.034 "enable": true 00:23:50.034 } 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "method": "bdev_wait_for_examine" 00:23:50.034 } 00:23:50.034 ] 00:23:50.034 }, 00:23:50.034 { 00:23:50.034 "subsystem": "nbd", 00:23:50.034 "config": [] 00:23:50.034 } 00:23:50.034 ] 00:23:50.034 }' 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1187316 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1187316 ']' 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1187316 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1187316 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1187316' 00:23:50.034 killing process with pid 1187316 00:23:50.034 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1187316 00:23:50.034 Received shutdown signal, test time was about 1.000000 seconds 00:23:50.034 00:23:50.034 Latency(us) 00:23:50.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.035 =================================================================================================================== 00:23:50.035 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.035 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1187316 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1187288 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1187288 ']' 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1187288 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1187288 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1187288' 00:23:50.292 killing process with pid 1187288 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1187288 00:23:50.292 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1187288 00:23:50.550 01:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:50.550 01:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.550 01:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:50.550 "subsystems": [ 00:23:50.550 { 00:23:50.550 "subsystem": "keyring", 00:23:50.550 "config": [ 00:23:50.550 { 00:23:50.550 "method": "keyring_file_add_key", 00:23:50.550 "params": { 00:23:50.550 "name": "key0", 00:23:50.550 "path": "/tmp/tmp.Nqm5Ygls3m" 00:23:50.550 } 00:23:50.550 } 00:23:50.550 ] 00:23:50.550 }, 00:23:50.550 { 00:23:50.550 "subsystem": "iobuf", 00:23:50.550 "config": [ 00:23:50.550 { 00:23:50.550 "method": "iobuf_set_options", 00:23:50.550 "params": { 00:23:50.550 "small_pool_count": 8192, 00:23:50.550 "large_pool_count": 1024, 00:23:50.550 "small_bufsize": 8192, 00:23:50.550 "large_bufsize": 135168 00:23:50.550 } 00:23:50.550 } 00:23:50.550 ] 00:23:50.550 }, 00:23:50.550 { 00:23:50.550 "subsystem": "sock", 00:23:50.550 "config": [ 00:23:50.550 { 
00:23:50.550 "method": "sock_set_default_impl", 00:23:50.550 "params": { 00:23:50.550 "impl_name": "posix" 00:23:50.550 } 00:23:50.550 }, 00:23:50.550 { 00:23:50.550 "method": "sock_impl_set_options", 00:23:50.550 "params": { 00:23:50.550 "impl_name": "ssl", 00:23:50.550 "recv_buf_size": 4096, 00:23:50.550 "send_buf_size": 4096, 00:23:50.550 "enable_recv_pipe": true, 00:23:50.550 "enable_quickack": false, 00:23:50.550 "enable_placement_id": 0, 00:23:50.550 "enable_zerocopy_send_server": true, 00:23:50.550 "enable_zerocopy_send_client": false, 00:23:50.550 "zerocopy_threshold": 0, 00:23:50.550 "tls_version": 0, 00:23:50.550 "enable_ktls": false 00:23:50.550 } 00:23:50.550 }, 00:23:50.550 { 00:23:50.550 "method": "sock_impl_set_options", 00:23:50.550 "params": { 00:23:50.550 "impl_name": "posix", 00:23:50.550 "recv_buf_size": 2097152, 00:23:50.550 "send_buf_size": 2097152, 00:23:50.550 "enable_recv_pipe": true, 00:23:50.550 "enable_quickack": false, 00:23:50.550 "enable_placement_id": 0, 00:23:50.550 "enable_zerocopy_send_server": true, 00:23:50.550 "enable_zerocopy_send_client": false, 00:23:50.550 "zerocopy_threshold": 0, 00:23:50.550 "tls_version": 0, 00:23:50.550 "enable_ktls": false 00:23:50.550 } 00:23:50.550 } 00:23:50.550 ] 00:23:50.550 }, 00:23:50.550 { 00:23:50.550 "subsystem": "vmd", 00:23:50.550 "config": [] 00:23:50.550 }, 00:23:50.550 { 00:23:50.550 "subsystem": "accel", 00:23:50.550 "config": [ 00:23:50.550 { 00:23:50.550 "method": "accel_set_options", 00:23:50.550 "params": { 00:23:50.550 "small_cache_size": 128, 00:23:50.550 "large_cache_size": 16, 00:23:50.551 "task_count": 2048, 00:23:50.551 "sequence_count": 2048, 00:23:50.551 "buf_count": 2048 00:23:50.551 } 00:23:50.551 } 00:23:50.551 ] 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "subsystem": "bdev", 00:23:50.551 "config": [ 00:23:50.551 { 00:23:50.551 "method": "bdev_set_options", 00:23:50.551 "params": { 00:23:50.551 "bdev_io_pool_size": 65535, 00:23:50.551 "bdev_io_cache_size": 256, 00:23:50.551 "bdev_auto_examine": true, 00:23:50.551 "iobuf_small_cache_size": 128, 00:23:50.551 "iobuf_large_cache_size": 16 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "bdev_raid_set_options", 00:23:50.551 "params": { 00:23:50.551 "process_window_size_kb": 1024 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "bdev_iscsi_set_options", 00:23:50.551 "params": { 00:23:50.551 "timeout_sec": 30 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "bdev_nvme_set_options", 00:23:50.551 "params": { 00:23:50.551 "action_on_timeout": "none", 00:23:50.551 "timeout_us": 0, 00:23:50.551 "timeout_admin_us": 0, 00:23:50.551 "keep_alive_timeout_ms": 10000, 00:23:50.551 "arbitration_burst": 0, 00:23:50.551 "low_priority_weight": 0, 00:23:50.551 "medium_priority_weight": 0, 00:23:50.551 "high_priority_weight": 0, 00:23:50.551 "nvme_adminq_poll_period_us": 10000, 00:23:50.551 "nvme_ioq_poll_period_us": 0, 00:23:50.551 "io_queue_requests": 0, 00:23:50.551 "delay_cmd_submit": true, 00:23:50.551 "transport_retry_count": 4, 00:23:50.551 "bdev_retry_count": 3, 00:23:50.551 "transport_ack_timeout": 0, 00:23:50.551 "ctrlr_loss_timeout_sec": 0, 00:23:50.551 "reconnect_delay_sec": 0, 00:23:50.551 "fast_io_fail_timeout_sec": 0, 00:23:50.551 "disable_auto_failback": false, 00:23:50.551 "generate_uuids": false, 00:23:50.551 "transport_tos": 0, 00:23:50.551 "nvme_error_stat": false, 00:23:50.551 "rdma_srq_size": 0, 00:23:50.551 "io_path_stat": false, 00:23:50.551 "allow_accel_sequence": false, 00:23:50.551 
"rdma_max_cq_size": 0, 00:23:50.551 "rdma_cm_event_timeout_ms": 0, 00:23:50.551 "dhchap_digests": [ 00:23:50.551 "sha256", 00:23:50.551 "sha384", 00:23:50.551 "sha512" 00:23:50.551 ], 00:23:50.551 "dhchap_dhgroups": [ 00:23:50.551 "null", 00:23:50.551 "ffdhe2048", 00:23:50.551 "ffdhe3072", 00:23:50.551 "ffdhe4096", 00:23:50.551 "ffdhe6144", 00:23:50.551 "ffdhe8192" 00:23:50.551 ] 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "bdev_nvme_set_hotplug", 00:23:50.551 "params": { 00:23:50.551 "period_us": 100000, 00:23:50.551 "enable": false 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "bdev_malloc_create", 00:23:50.551 "params": { 00:23:50.551 "name": "malloc0", 00:23:50.551 "num_blocks": 8192, 00:23:50.551 "block_size": 4096, 00:23:50.551 "physical_block_size": 4096, 00:23:50.551 "uuid": "2cb6ad35-2291-4c4e-a908-08cc3ef5419b", 00:23:50.551 "optimal_io_boundary": 0 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "bdev_wait_for_examine" 00:23:50.551 } 00:23:50.551 ] 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "subsystem": "nbd", 00:23:50.551 "config": [] 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "subsystem": "scheduler", 00:23:50.551 "config": [ 00:23:50.551 { 00:23:50.551 "method": "framework_set_scheduler", 00:23:50.551 "params": { 00:23:50.551 "name": "static" 00:23:50.551 } 00:23:50.551 } 00:23:50.551 ] 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "subsystem": "nvmf", 00:23:50.551 "config": [ 00:23:50.551 { 00:23:50.551 "method": "nvmf_set_config", 00:23:50.551 "params": { 00:23:50.551 "discovery_filter": "match_any", 00:23:50.551 "admin_cmd_passthru": { 00:23:50.551 "identify_ctrlr": false 00:23:50.551 } 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "nvmf_set_max_subsystems", 00:23:50.551 "params": { 00:23:50.551 "max_subsystems": 1024 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "nvmf_set_crdt", 00:23:50.551 "params": { 00:23:50.551 "crdt1": 0, 00:23:50.551 "crdt2": 0, 00:23:50.551 "crdt3": 0 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "nvmf_create_transport", 00:23:50.551 "params": { 00:23:50.551 "trtype": "TCP", 00:23:50.551 "max_queue_depth": 128, 00:23:50.551 "max_io_qpairs_per_ctrlr": 127, 00:23:50.551 "in_capsule_data_size": 4096, 00:23:50.551 "max_io_size": 131072, 00:23:50.551 "io_unit_size": 131072, 00:23:50.551 "max_aq_depth": 128, 00:23:50.551 "num_shared_buffers": 511, 00:23:50.551 "buf_cache_size": 4294967295, 00:23:50.551 "dif_insert_or_strip": false, 00:23:50.551 "zcopy": false, 00:23:50.551 "c2h_success": false, 00:23:50.551 "sock_priority": 0, 00:23:50.551 "abort_timeout_sec": 1, 00:23:50.551 "ack_timeout": 0, 00:23:50.551 "data_wr_pool_size": 0 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "nvmf_create_subsystem", 00:23:50.551 "params": { 00:23:50.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.551 "allow_any_host": false, 00:23:50.551 "serial_number": "00000000000000000000", 00:23:50.551 "model_number": "SPDK bdev Controller", 00:23:50.551 "max_namespaces": 32, 00:23:50.551 "min_cntlid": 1, 00:23:50.551 "max_cntlid": 65519, 00:23:50.551 "ana_reporting": false 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "nvmf_subsystem_add_host", 00:23:50.551 "params": { 00:23:50.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.551 "host": "nqn.2016-06.io.spdk:host1", 00:23:50.551 "psk": "key0" 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "nvmf_subsystem_add_ns", 00:23:50.551 
"params": { 00:23:50.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.551 "namespace": { 00:23:50.551 "nsid": 1, 00:23:50.551 "bdev_name": "malloc0", 00:23:50.551 "nguid": "2CB6AD3522914C4EA90808CC3EF5419B", 00:23:50.551 "uuid": "2cb6ad35-2291-4c4e-a908-08cc3ef5419b", 00:23:50.551 "no_auto_visible": false 00:23:50.551 } 00:23:50.551 } 00:23:50.551 }, 00:23:50.551 { 00:23:50.551 "method": "nvmf_subsystem_add_listener", 00:23:50.551 "params": { 00:23:50.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.551 "listen_address": { 00:23:50.551 "trtype": "TCP", 00:23:50.551 "adrfam": "IPv4", 00:23:50.551 "traddr": "10.0.0.2", 00:23:50.551 "trsvcid": "4420" 00:23:50.551 }, 00:23:50.551 "secure_channel": true 00:23:50.551 } 00:23:50.551 } 00:23:50.551 ] 00:23:50.551 } 00:23:50.551 ] 00:23:50.551 }' 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1187726 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1187726 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1187726 ']' 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.551 01:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.551 [2024-07-14 01:09:39.870089] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:50.551 [2024-07-14 01:09:39.870165] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.551 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.551 [2024-07-14 01:09:39.932192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.809 [2024-07-14 01:09:40.020227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.809 [2024-07-14 01:09:40.020300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.809 [2024-07-14 01:09:40.020330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.809 [2024-07-14 01:09:40.020342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.809 [2024-07-14 01:09:40.020353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:50.809 [2024-07-14 01:09:40.020431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.067 [2024-07-14 01:09:40.265251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.067 [2024-07-14 01:09:40.297244] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.067 [2024-07-14 01:09:40.309105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.633 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.633 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:51.633 01:09:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.633 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.633 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.633 01:09:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.633 01:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1187875 00:23:51.633 01:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1187875 /var/tmp/bdevperf.sock 00:23:51.634 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1187875 ']' 00:23:51.634 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.634 01:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:51.634 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.634 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
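Note: the target restarted above and the bdevperf instance being waited on here are both started from the JSON captured earlier with save_config, handed to the new processes as a startup config file on an inherited file descriptor; that is what the -c /dev/fd/62 and -c /dev/fd/63 arguments are. A sketch of the pattern using bash process substitution and the same flags as this run (the substituted pipe shows up inside the child as /dev/fd/6x):

    # capture the live configuration of target and bdevperf
    tgtcfg=$(./scripts/rpc.py save_config)
    bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # replay it into fresh processes
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &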
00:23:51.634 01:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:51.634 "subsystems": [ 00:23:51.634 { 00:23:51.634 "subsystem": "keyring", 00:23:51.634 "config": [ 00:23:51.634 { 00:23:51.634 "method": "keyring_file_add_key", 00:23:51.634 "params": { 00:23:51.634 "name": "key0", 00:23:51.634 "path": "/tmp/tmp.Nqm5Ygls3m" 00:23:51.634 } 00:23:51.634 } 00:23:51.634 ] 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "subsystem": "iobuf", 00:23:51.634 "config": [ 00:23:51.634 { 00:23:51.634 "method": "iobuf_set_options", 00:23:51.634 "params": { 00:23:51.634 "small_pool_count": 8192, 00:23:51.634 "large_pool_count": 1024, 00:23:51.634 "small_bufsize": 8192, 00:23:51.634 "large_bufsize": 135168 00:23:51.634 } 00:23:51.634 } 00:23:51.634 ] 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "subsystem": "sock", 00:23:51.634 "config": [ 00:23:51.634 { 00:23:51.634 "method": "sock_set_default_impl", 00:23:51.634 "params": { 00:23:51.634 "impl_name": "posix" 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "sock_impl_set_options", 00:23:51.634 "params": { 00:23:51.634 "impl_name": "ssl", 00:23:51.634 "recv_buf_size": 4096, 00:23:51.634 "send_buf_size": 4096, 00:23:51.634 "enable_recv_pipe": true, 00:23:51.634 "enable_quickack": false, 00:23:51.634 "enable_placement_id": 0, 00:23:51.634 "enable_zerocopy_send_server": true, 00:23:51.634 "enable_zerocopy_send_client": false, 00:23:51.634 "zerocopy_threshold": 0, 00:23:51.634 "tls_version": 0, 00:23:51.634 "enable_ktls": false 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "sock_impl_set_options", 00:23:51.634 "params": { 00:23:51.634 "impl_name": "posix", 00:23:51.634 "recv_buf_size": 2097152, 00:23:51.634 "send_buf_size": 2097152, 00:23:51.634 "enable_recv_pipe": true, 00:23:51.634 "enable_quickack": false, 00:23:51.634 "enable_placement_id": 0, 00:23:51.634 "enable_zerocopy_send_server": true, 00:23:51.634 "enable_zerocopy_send_client": false, 00:23:51.634 "zerocopy_threshold": 0, 00:23:51.634 "tls_version": 0, 00:23:51.634 "enable_ktls": false 00:23:51.634 } 00:23:51.634 } 00:23:51.634 ] 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "subsystem": "vmd", 00:23:51.634 "config": [] 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "subsystem": "accel", 00:23:51.634 "config": [ 00:23:51.634 { 00:23:51.634 "method": "accel_set_options", 00:23:51.634 "params": { 00:23:51.634 "small_cache_size": 128, 00:23:51.634 "large_cache_size": 16, 00:23:51.634 "task_count": 2048, 00:23:51.634 "sequence_count": 2048, 00:23:51.634 "buf_count": 2048 00:23:51.634 } 00:23:51.634 } 00:23:51.634 ] 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "subsystem": "bdev", 00:23:51.634 "config": [ 00:23:51.634 { 00:23:51.634 "method": "bdev_set_options", 00:23:51.634 "params": { 00:23:51.634 "bdev_io_pool_size": 65535, 00:23:51.634 "bdev_io_cache_size": 256, 00:23:51.634 "bdev_auto_examine": true, 00:23:51.634 "iobuf_small_cache_size": 128, 00:23:51.634 "iobuf_large_cache_size": 16 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "bdev_raid_set_options", 00:23:51.634 "params": { 00:23:51.634 "process_window_size_kb": 1024 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "bdev_iscsi_set_options", 00:23:51.634 "params": { 00:23:51.634 "timeout_sec": 30 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "bdev_nvme_set_options", 00:23:51.634 "params": { 00:23:51.634 "action_on_timeout": "none", 00:23:51.634 "timeout_us": 0, 00:23:51.634 "timeout_admin_us": 0, 00:23:51.634 "keep_alive_timeout_ms": 
10000, 00:23:51.634 "arbitration_burst": 0, 00:23:51.634 "low_priority_weight": 0, 00:23:51.634 "medium_priority_weight": 0, 00:23:51.634 "high_priority_weight": 0, 00:23:51.634 "nvme_adminq_poll_period_us": 10000, 00:23:51.634 "nvme_ioq_poll_period_us": 0, 00:23:51.634 "io_queue_requests": 512, 00:23:51.634 "delay_cmd_submit": true, 00:23:51.634 "transport_retry_count": 4, 00:23:51.634 "bdev_retry_count": 3, 00:23:51.634 "transport_ack_timeout": 0, 00:23:51.634 "ctrlr_loss_timeout_sec": 0, 00:23:51.634 "reconnect_delay_sec": 0, 00:23:51.634 "fast_io_fail_timeout_sec": 0, 00:23:51.634 "disable_auto_failback": false, 00:23:51.634 "generate_uuids": false, 00:23:51.634 "transport_tos": 0, 00:23:51.634 "nvme_error_stat": false, 00:23:51.634 "rdma_srq_size": 0, 00:23:51.634 "io_path_stat": false, 00:23:51.634 "allow_accel_sequence": false, 00:23:51.634 "rdma_max_cq_size": 0, 00:23:51.634 "rdma_cm_event_timeout_ms": 0, 00:23:51.634 "dhchap_digests": [ 00:23:51.634 "sha256", 00:23:51.634 "sha384", 00:23:51.634 "sha512" 00:23:51.634 ], 00:23:51.634 "dhchap_dhgroups": [ 00:23:51.634 "null", 00:23:51.634 "ffdhe2048", 00:23:51.634 "ffdhe3072", 00:23:51.634 "ffdhe4096", 00:23:51.634 "ffdhe6144", 00:23:51.634 "ffdhe8192" 00:23:51.634 ] 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "bdev_nvme_attach_controller", 00:23:51.634 "params": { 00:23:51.634 "name": "nvme0", 00:23:51.634 "trtype": "TCP", 00:23:51.634 "adrfam": "IPv4", 00:23:51.634 "traddr": "10.0.0.2", 00:23:51.634 "trsvcid": "4420", 00:23:51.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.634 "prchk_reftag": false, 00:23:51.634 "prchk_guard": false, 00:23:51.634 "ctrlr_loss_timeout_sec": 0, 00:23:51.634 "reconnect_delay_sec": 0, 00:23:51.634 "fast_io_fail_timeout_sec": 0, 00:23:51.634 "psk": "key0", 00:23:51.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.634 "hdgst": false, 00:23:51.634 "ddgst": false 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "bdev_nvme_set_hotplug", 00:23:51.634 "params": { 00:23:51.634 "period_us": 100000, 00:23:51.634 "enable": false 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "bdev_enable_histogram", 00:23:51.634 "params": { 00:23:51.634 "name": "nvme0n1", 00:23:51.634 "enable": true 00:23:51.634 } 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "method": "bdev_wait_for_examine" 00:23:51.634 } 00:23:51.634 ] 00:23:51.634 }, 00:23:51.634 { 00:23:51.634 "subsystem": "nbd", 00:23:51.634 "config": [] 00:23:51.634 } 00:23:51.634 ] 00:23:51.634 }' 00:23:51.634 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.634 01:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.634 [2024-07-14 01:09:40.942537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
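Note: while this config-driven bdevperf instance initializes, the script will not issue any I/O until it has confirmed that the controller named in the JSON actually attached. The check traced just below reduces to:

    # list the controllers bdevperf knows about and require nvme0 to be present
    name=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || exit 1
    # only then run the verify workload
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests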
00:23:51.634 [2024-07-14 01:09:40.942614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187875 ] 00:23:51.634 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.634 [2024-07-14 01:09:41.004760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.893 [2024-07-14 01:09:41.096767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.893 [2024-07-14 01:09:41.269545] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.826 01:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.826 01:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:52.826 01:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.826 01:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:52.826 01:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.826 01:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:53.083 Running I/O for 1 seconds... 00:23:54.026 00:23:54.026 Latency(us) 00:23:54.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.026 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:54.026 Verification LBA range: start 0x0 length 0x2000 00:23:54.026 nvme0n1 : 1.06 1748.12 6.83 0.00 0.00 71464.72 7184.69 100973.99 00:23:54.026 =================================================================================================================== 00:23:54.026 Total : 1748.12 6.83 0.00 0.00 71464.72 7184.69 100973.99 00:23:54.026 0 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:54.026 nvmf_trace.0 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1187875 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1187875 ']' 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1187875 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1187875 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1187875' 00:23:54.026 killing process with pid 1187875 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1187875 00:23:54.026 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.026 00:23:54.026 Latency(us) 00:23:54.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.026 =================================================================================================================== 00:23:54.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.026 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1187875 00:23:54.285 01:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:54.285 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.285 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:54.285 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.285 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:54.285 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.285 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.285 rmmod nvme_tcp 00:23:54.285 rmmod nvme_fabrics 00:23:54.286 rmmod nvme_keyring 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1187726 ']' 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1187726 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1187726 ']' 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1187726 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:54.286 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1187726 00:23:54.543 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:54.543 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:54.543 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1187726' 00:23:54.543 killing process with pid 1187726 00:23:54.543 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1187726 00:23:54.543 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1187726 00:23:54.801 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.801 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.801 01:09:43 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.801 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.801 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.801 01:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.801 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.801 01:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.703 01:09:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.703 01:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.53CRbmoznH /tmp/tmp.9LtFdPWF03 /tmp/tmp.Nqm5Ygls3m 00:23:56.703 00:23:56.703 real 1m18.900s 00:23:56.703 user 2m5.998s 00:23:56.703 sys 0m28.165s 00:23:56.703 01:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.703 01:09:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.703 ************************************ 00:23:56.703 END TEST nvmf_tls 00:23:56.703 ************************************ 00:23:56.703 01:09:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:56.703 01:09:46 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:56.703 01:09:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:56.703 01:09:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.703 01:09:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:56.703 ************************************ 00:23:56.703 START TEST nvmf_fips 00:23:56.703 ************************************ 00:23:56.703 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:56.962 * Looking for test storage... 
00:23:56.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.962 01:09:46 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:56.962 Error setting digest 00:23:56.962 00D2165E3D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:56.962 00D2165E3D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.962 01:09:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.862 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.863 
01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:58.863 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:58.863 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:58.863 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:58.863 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:23:58.863 00:23:58.863 --- 10.0.0.2 ping statistics --- 00:23:58.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.863 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:58.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:23:58.863 00:23:58.863 --- 10.0.0.1 ping statistics --- 00:23:58.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.863 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1190112 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1190112 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1190112 ']' 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.863 01:09:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:59.121 [2024-07-14 01:09:48.331611] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:59.121 [2024-07-14 01:09:48.331698] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.121 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.121 [2024-07-14 01:09:48.401651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.121 [2024-07-14 01:09:48.490878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.121 [2024-07-14 01:09:48.490945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
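The trace above shows prepare_net_devs finding the two E810 ports (ice driver, device 0x159b) and nvmf_tcp_init splitting them between the default namespace and a fresh cvl_0_0_ns_spdk namespace, then checking reachability in both directions before nvmf_tgt is started inside the namespace on core 1 (-m 0x2). Stripped of the surrounding tracing, the setup amounts to the following (interface names cvl_0_0/cvl_0_1 are specific to this host, and everything runs as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator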
00:23:59.121 [2024-07-14 01:09:48.490962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.121 [2024-07-14 01:09:48.490975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.122 [2024-07-14 01:09:48.490987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.122 [2024-07-14 01:09:48.491018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:00.054 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:00.312 [2024-07-14 01:09:49.541169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.312 [2024-07-14 01:09:49.557166] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:00.312 [2024-07-14 01:09:49.557357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.312 [2024-07-14 01:09:49.589685] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:00.312 malloc0 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1190266 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1190266 /var/tmp/bdevperf.sock 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1190266 ']' 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.312 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.312 [2024-07-14 01:09:49.678685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:00.312 [2024-07-14 01:09:49.678766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190266 ] 00:24:00.312 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.569 [2024-07-14 01:09:49.741216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.569 [2024-07-14 01:09:49.826475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.569 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.569 01:09:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:24:00.570 01:09:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:00.828 [2024-07-14 01:09:50.164588] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.828 [2024-07-14 01:09:50.164744] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:00.828 TLSTESTn1 00:24:01.085 01:09:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.086 Running I/O for 10 seconds... 
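At this point the target inside the namespace has a TLS-enabled listener on 10.0.0.2:4420 keyed by the PSK written to key.txt, and the host side exercises it through bdevperf. Condensed, the initiator-side sequence traced above is roughly the following (paths are shortened to the spdk checkout root; the real test also waits for /var/tmp/bdevperf.sock with waitforlisten before issuing RPCs):

    chmod 0600 key.txt                                   # PSK in NVMe TLS interchange format, as echoed above
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
            -q 128 -o 4096 -w verify -t 10 &
    # once the RPC socket is up, attach a TLS-wrapped controller and start the run
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
            --psk key.txt
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests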
00:24:11.084 00:24:11.084 Latency(us) 00:24:11.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.084 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:11.084 Verification LBA range: start 0x0 length 0x2000 00:24:11.084 TLSTESTn1 : 10.06 1831.06 7.15 0.00 0.00 69702.28 7524.50 100197.26 00:24:11.084 =================================================================================================================== 00:24:11.084 Total : 1831.06 7.15 0.00 0.00 69702.28 7524.50 100197.26 00:24:11.084 0 00:24:11.084 01:10:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:11.084 01:10:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:11.084 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:11.085 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:11.085 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:11.085 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:11.085 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:11.085 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:11.085 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:11.085 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:11.085 nvmf_trace.0 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1190266 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1190266 ']' 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1190266 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1190266 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1190266' 00:24:11.343 killing process with pid 1190266 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1190266 00:24:11.343 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.343 00:24:11.343 Latency(us) 00:24:11.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.343 =================================================================================================================== 00:24:11.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.343 [2024-07-14 01:10:00.548059] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:11.343 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1190266 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.602 rmmod nvme_tcp 00:24:11.602 rmmod nvme_fabrics 00:24:11.602 rmmod nvme_keyring 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1190112 ']' 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1190112 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1190112 ']' 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1190112 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1190112 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1190112' 00:24:11.602 killing process with pid 1190112 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1190112 00:24:11.602 [2024-07-14 01:10:00.864886] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:11.602 01:10:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1190112 00:24:11.862 01:10:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:11.862 01:10:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:11.862 01:10:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:11.862 01:10:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:11.862 01:10:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:11.862 01:10:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.862 01:10:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:11.862 01:10:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.764 01:10:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:13.764 01:10:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:13.764 00:24:13.764 real 0m17.094s 00:24:13.764 user 0m21.141s 00:24:13.764 sys 0m6.422s 00:24:13.764 01:10:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:13.764 01:10:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:13.764 ************************************ 00:24:13.764 END TEST nvmf_fips 
00:24:13.764 ************************************ 00:24:14.022 01:10:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:14.022 01:10:03 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:14.022 01:10:03 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:14.022 01:10:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:14.022 01:10:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.022 01:10:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.022 ************************************ 00:24:14.022 START TEST nvmf_fuzz 00:24:14.022 ************************************ 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:14.022 * Looking for test storage... 00:24:14.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.022 01:10:03 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.022 01:10:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:15.921 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.921 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:15.922 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:15.922 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:15.922 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.922 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:16.181 00:24:16.181 --- 10.0.0.2 ping statistics --- 00:24:16.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.181 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:24:16.181 00:24:16.181 --- 10.0.0.1 ping statistics --- 00:24:16.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.181 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1193513 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1193513 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1193513 ']' 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
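The fuzz test repeats the same device discovery and namespace bring-up as the FIPS test, then starts nvmf_tgt inside the namespace on core 0 (-m 0x1). The next stretch of trace provisions the fuzz target over RPC through the rpc_cmd helper; issued by hand against the default /var/tmp/spdk.sock it is the equivalent of:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512        # 64 MiB backing bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420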
00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.181 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.440 Malloc0 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:16.440 01:10:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:48.494 Fuzzing completed. 
Shutting down the fuzz application 00:24:48.494 00:24:48.494 Dumping successful admin opcodes: 00:24:48.494 8, 9, 10, 24, 00:24:48.494 Dumping successful io opcodes: 00:24:48.494 0, 9, 00:24:48.494 NS: 0x200003aeff00 I/O qp, Total commands completed: 448923, total successful commands: 2611, random_seed: 3536530176 00:24:48.494 NS: 0x200003aeff00 admin qp, Total commands completed: 56016, total successful commands: 445, random_seed: 503823424 00:24:48.494 01:10:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:48.494 Fuzzing completed. Shutting down the fuzz application 00:24:48.494 00:24:48.494 Dumping successful admin opcodes: 00:24:48.494 24, 00:24:48.494 Dumping successful io opcodes: 00:24:48.494 00:24:48.494 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3707242168 00:24:48.494 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3707379892 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:48.494 rmmod nvme_tcp 00:24:48.494 rmmod nvme_fabrics 00:24:48.494 rmmod nvme_keyring 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1193513 ']' 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1193513 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1193513 ']' 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1193513 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1193513 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
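The fabrics_fuzz.sh run traced above can be summarized as the sketch below: start nvmf_tgt inside the target namespace, stand up one TCP subsystem backed by a malloc bdev over RPC, then point nvme_fuzz at it twice, once as a 30-second randomized run with a fixed seed and once replaying the canned requests in example.json. In this sketch rpc_cmd stands for the test suite's RPC wrapper (it talks to the target's /var/tmp/spdk.sock), and paths are written relative to the SPDK checkout instead of the absolute Jenkins workspace paths seen in the log.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create -b Malloc0 64 512      # 64 MB malloc bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a   # timed random run
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" \
      -j ./test/app/fuzz/nvme_fuzz/example.json -a                              # JSON replay run
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"                                   # the suite does this via killprocess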
00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1193513' 00:24:48.494 killing process with pid 1193513 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1193513 00:24:48.494 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1193513 00:24:48.752 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:48.752 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:48.752 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:48.752 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.752 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:48.752 01:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.752 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.752 01:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.685 01:10:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:50.685 01:10:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:50.685 00:24:50.685 real 0m36.849s 00:24:50.685 user 0m50.734s 00:24:50.685 sys 0m15.309s 00:24:50.685 01:10:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.685 01:10:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.685 ************************************ 00:24:50.685 END TEST nvmf_fuzz 00:24:50.685 ************************************ 00:24:50.947 01:10:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:50.947 01:10:40 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:50.947 01:10:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.947 01:10:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.947 01:10:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.947 ************************************ 00:24:50.947 START TEST nvmf_multiconnection 00:24:50.947 ************************************ 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:50.947 * Looking for test storage... 
00:24:50.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.947 01:10:40 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.948 01:10:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.848 01:10:42 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:52.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:52.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:52.848 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:52.849 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:52.849 01:10:42 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:52.849 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.849 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
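Before the namespace is rebuilt for the multiconnection test, gather_supported_nvmf_pci_devs (traced above) decides which NICs to use: it matches the host's PCI functions against a list of supported Intel E810/X722 and Mellanox device IDs, then reads the netdev names out of sysfs. A rough sketch of that last step, using the two E810 functions (0x8086:0x159b) found in this run:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdev" ] || continue
          echo "Found net devices under $pci: $(basename "$netdev")"
      done
  done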
00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:53.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:24:53.107 00:24:53.107 --- 10.0.0.2 ping statistics --- 00:24:53.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.107 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:53.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:24:53.107 00:24:53.107 --- 10.0.0.1 ping statistics --- 00:24:53.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.107 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1199228 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1199228 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1199228 ']' 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
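nvmfappstart then launches the target inside the namespace the same way the fuzz test did, but with core mask 0xF, which is why the EAL messages that follow report four reactors; -e 0xFFFF enables all tracepoint groups, matching the "Tracepoint Group Mask 0xFFFF" notice below. A minimal sketch of this step, with waitforlisten being the suite's helper that polls until the target's RPC socket accepts connections:

  modprobe nvme-tcp                       # kernel initiator used later by 'nvme connect'
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"                # wait for /var/tmp/spdk.sock to come up
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192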
00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.107 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.107 [2024-07-14 01:10:42.391763] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:53.107 [2024-07-14 01:10:42.391855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.107 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.107 [2024-07-14 01:10:42.461754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:53.365 [2024-07-14 01:10:42.556900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.365 [2024-07-14 01:10:42.556961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.365 [2024-07-14 01:10:42.556976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.365 [2024-07-14 01:10:42.556988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.365 [2024-07-14 01:10:42.556999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.365 [2024-07-14 01:10:42.559889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.365 [2024-07-14 01:10:42.559947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.365 [2024-07-14 01:10:42.559971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.365 [2024-07-14 01:10:42.559975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.365 [2024-07-14 01:10:42.698647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.365 
01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.365 Malloc1 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.365 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.366 [2024-07-14 01:10:42.753780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.366 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 Malloc2 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 Malloc3 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 Malloc4 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 Malloc5 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 Malloc6 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 Malloc7 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.624 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:53.625 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.625 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 Malloc8 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 Malloc9 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.883 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 Malloc10 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 Malloc11 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
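The per-subsystem blocks traced above (Malloc1/cnode1 through Malloc11/cnode11) all follow one pattern; multiconnection.sh simply loops NVMF_SUBSYS=11 times. A condensed restatement of that setup loop as it appears in this trace:

  for i in $(seq 1 11); do
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                   # 64 MB backing bdev
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done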
00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:53.884 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:54.450 01:10:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:54.450 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:54.450 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:54.450 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:54.450 01:10:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:56.979 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:56.979 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:56.979 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:56.979 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:56.979 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:56.979 01:10:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:56.979 01:10:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.979 01:10:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:57.236 01:10:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:57.236 01:10:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:57.236 01:10:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:57.236 01:10:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:57.236 01:10:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:59.757 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:59.757 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:59.757 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:59.757 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:59.757 01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:59.757 
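Each subsystem is then attached from the initiator side with the kernel nvme-tcp host and verified by its serial number; the connect/waitforserial exchanges that follow for cnode2 onward repeat the pattern sketched here. The host NQN and host ID are the values this run generated via nvme gen-hostnqn, and the polling loop is a simplified stand-in for the suite's waitforserial helper (which retries about 15 times, two seconds apart).

  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"
  for i in $(seq 1 11); do
      nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      # wait until a block device with serial SPDK$i shows up in lsblk
      until lsblk -l -o NAME,SERIAL | grep -qw "SPDK$i"; do
          sleep 2
      done
  done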
01:10:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:59.757 01:10:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:59.757 01:10:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:00.016 01:10:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:00.016 01:10:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:00.016 01:10:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:00.016 01:10:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:00.016 01:10:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:01.915 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:01.915 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:01.915 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:01.915 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:01.915 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:01.915 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:01.915 01:10:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:01.915 01:10:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:02.849 01:10:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:02.849 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:02.849 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.849 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:02.850 01:10:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:04.743 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:04.743 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:04.743 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:25:04.743 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:04.743 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.743 01:10:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:04.743 01:10:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:04.743 01:10:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:05.306 01:10:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:05.306 01:10:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:05.306 01:10:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:05.306 01:10:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:05.306 01:10:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:07.832 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:07.832 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:07.832 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:07.832 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:07.832 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:07.832 01:10:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:07.832 01:10:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:07.832 01:10:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:08.089 01:10:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:08.089 01:10:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:08.089 01:10:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:08.089 01:10:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:08.089 01:10:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:10.649 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:10.649 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:10.649 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:10.649 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:10.649 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:10.649 01:10:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:10.649 01:10:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:10.649 01:10:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:10.906 01:11:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:10.906 01:11:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:10.906 01:11:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:10.906 01:11:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:10.906 01:11:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:13.431 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:13.431 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:13.431 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:13.431 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:13.431 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:13.431 01:11:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:13.431 01:11:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.431 01:11:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:13.999 01:11:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:13.999 01:11:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:13.999 01:11:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:13.999 01:11:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:13.999 01:11:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:15.894 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:15.894 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:15.894 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:15.894 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:15.894 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.894 01:11:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:15.894 01:11:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.894 01:11:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:16.827 01:11:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:16.827 01:11:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:16.827 01:11:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.827 01:11:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
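Each of the eleven connects traced in this stretch follows the same pattern: nvme connect over TCP to the per-index subsystem NQN at 10.0.0.2:4420, then waitforserial polls lsblk until a block device with serial SPDK$i appears. A condensed sketch of that pattern, assuming a helper named wait_for_serial (the name is illustrative; the --hostnqn/--hostid arguments visible in the trace are omitted here for brevity):

wait_for_serial() {    # simplified rendering of the waitforserial helper seen in the trace
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        sleep 2
        # the namespace is usable once lsblk reports a device carrying this serial
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1
}

for i in $(seq 1 11); do
    nvme connect -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    wait_for_serial "SPDK$i"
done

Polling on the serial rather than sleeping for a fixed interval keeps the test robust to slow enumeration; in this run every device shows up on the first poll (nvme_devices=1 right after the initial two-second sleep).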
00:25:16.827 01:11:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:18.725 01:11:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:18.725 01:11:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:18.725 01:11:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:18.725 01:11:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:18.725 01:11:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:18.725 01:11:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:18.725 01:11:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.725 01:11:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:19.659 01:11:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:19.659 01:11:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:19.659 01:11:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:19.659 01:11:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:19.659 01:11:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:22.184 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:22.184 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:22.184 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:22.184 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:22.184 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:22.184 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:22.184 01:11:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.184 01:11:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:22.749 01:11:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:22.749 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:22.749 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.749 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:22.749 01:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:24.646 01:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:24.646 01:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:24.646 01:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:24.646 01:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:24.646 01:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.646 01:11:13 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:24.646 01:11:13 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:24.646 [global] 00:25:24.646 thread=1 00:25:24.646 invalidate=1 00:25:24.646 rw=read 00:25:24.646 time_based=1 00:25:24.646 runtime=10 00:25:24.646 ioengine=libaio 00:25:24.646 direct=1 00:25:24.646 bs=262144 00:25:24.646 iodepth=64 00:25:24.646 norandommap=1 00:25:24.646 numjobs=1 00:25:24.646 00:25:24.646 [job0] 00:25:24.646 filename=/dev/nvme0n1 00:25:24.646 [job1] 00:25:24.646 filename=/dev/nvme10n1 00:25:24.646 [job2] 00:25:24.646 filename=/dev/nvme1n1 00:25:24.646 [job3] 00:25:24.646 filename=/dev/nvme2n1 00:25:24.646 [job4] 00:25:24.646 filename=/dev/nvme3n1 00:25:24.646 [job5] 00:25:24.646 filename=/dev/nvme4n1 00:25:24.646 [job6] 00:25:24.646 filename=/dev/nvme5n1 00:25:24.646 [job7] 00:25:24.646 filename=/dev/nvme6n1 00:25:24.646 [job8] 00:25:24.646 filename=/dev/nvme7n1 00:25:24.646 [job9] 00:25:24.646 filename=/dev/nvme8n1 00:25:24.646 [job10] 00:25:24.646 filename=/dev/nvme9n1 00:25:24.903 Could not set queue depth (nvme0n1) 00:25:24.903 Could not set queue depth (nvme10n1) 00:25:24.903 Could not set queue depth (nvme1n1) 00:25:24.903 Could not set queue depth (nvme2n1) 00:25:24.903 Could not set queue depth (nvme3n1) 00:25:24.903 Could not set queue depth (nvme4n1) 00:25:24.903 Could not set queue depth (nvme5n1) 00:25:24.903 Could not set queue depth (nvme6n1) 00:25:24.903 Could not set queue depth (nvme7n1) 00:25:24.903 Could not set queue depth (nvme8n1) 00:25:24.903 Could not set queue depth (nvme9n1) 00:25:24.903 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:24.904 fio-3.35 00:25:24.904 Starting 11 threads 00:25:37.176 00:25:37.177 job0: 
(groupid=0, jobs=1): err= 0: pid=1203491: Sun Jul 14 01:11:24 2024 00:25:37.177 read: IOPS=772, BW=193MiB/s (202MB/s)(1937MiB/10028msec) 00:25:37.177 slat (usec): min=9, max=121241, avg=906.52, stdev=4077.88 00:25:37.177 clat (usec): min=1261, max=385158, avg=81883.57, stdev=49083.35 00:25:37.177 lat (usec): min=1286, max=385178, avg=82790.08, stdev=49471.18 00:25:37.177 clat percentiles (msec): 00:25:37.177 | 1.00th=[ 5], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 46], 00:25:37.177 | 30.00th=[ 55], 40.00th=[ 65], 50.00th=[ 75], 60.00th=[ 85], 00:25:37.177 | 70.00th=[ 95], 80.00th=[ 114], 90.00th=[ 134], 95.00th=[ 155], 00:25:37.177 | 99.00th=[ 296], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 384], 00:25:37.177 | 99.99th=[ 384] 00:25:37.177 bw ( KiB/s): min=84480, max=372224, per=10.06%, avg=196698.85, stdev=74288.02, samples=20 00:25:37.177 iops : min= 330, max= 1454, avg=768.35, stdev=290.19, samples=20 00:25:37.177 lat (msec) : 2=0.10%, 4=0.77%, 10=2.40%, 20=1.45%, 50=20.81% 00:25:37.177 lat (msec) : 100=48.62%, 250=23.90%, 500=1.95% 00:25:37.177 cpu : usr=0.42%, sys=2.20%, ctx=2020, majf=0, minf=4097 00:25:37.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:37.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.177 issued rwts: total=7746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.177 job1: (groupid=0, jobs=1): err= 0: pid=1203492: Sun Jul 14 01:11:24 2024 00:25:37.177 read: IOPS=634, BW=159MiB/s (166MB/s)(1599MiB/10081msec) 00:25:37.177 slat (usec): min=11, max=47163, avg=1418.21, stdev=3778.71 00:25:37.177 clat (msec): min=15, max=220, avg=99.40, stdev=30.55 00:25:37.177 lat (msec): min=15, max=224, avg=100.82, stdev=30.94 00:25:37.177 clat percentiles (msec): 00:25:37.177 | 1.00th=[ 41], 5.00th=[ 54], 10.00th=[ 63], 20.00th=[ 72], 00:25:37.177 | 30.00th=[ 81], 40.00th=[ 91], 50.00th=[ 100], 60.00th=[ 108], 00:25:37.177 | 70.00th=[ 115], 80.00th=[ 124], 90.00th=[ 138], 95.00th=[ 148], 00:25:37.177 | 99.00th=[ 188], 99.50th=[ 197], 99.90th=[ 213], 99.95th=[ 215], 00:25:37.177 | 99.99th=[ 222] 00:25:37.177 bw ( KiB/s): min=95232, max=235520, per=8.29%, avg=162099.20, stdev=37682.94, samples=20 00:25:37.177 iops : min= 372, max= 920, avg=633.20, stdev=147.20, samples=20 00:25:37.177 lat (msec) : 20=0.06%, 50=3.19%, 100=48.01%, 250=48.74% 00:25:37.177 cpu : usr=0.32%, sys=2.19%, ctx=1449, majf=0, minf=4097 00:25:37.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:37.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.177 issued rwts: total=6395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.177 job2: (groupid=0, jobs=1): err= 0: pid=1203493: Sun Jul 14 01:11:24 2024 00:25:37.177 read: IOPS=611, BW=153MiB/s (160MB/s)(1545MiB/10103msec) 00:25:37.177 slat (usec): min=9, max=82618, avg=871.65, stdev=3507.60 00:25:37.177 clat (msec): min=8, max=302, avg=103.67, stdev=34.34 00:25:37.177 lat (msec): min=10, max=302, avg=104.54, stdev=34.66 00:25:37.177 clat percentiles (msec): 00:25:37.177 | 1.00th=[ 30], 5.00th=[ 50], 10.00th=[ 63], 20.00th=[ 81], 00:25:37.177 | 30.00th=[ 88], 40.00th=[ 95], 50.00th=[ 102], 60.00th=[ 109], 00:25:37.177 | 70.00th=[ 118], 
80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 157], 00:25:37.177 | 99.00th=[ 203], 99.50th=[ 247], 99.90th=[ 275], 99.95th=[ 300], 00:25:37.177 | 99.99th=[ 305] 00:25:37.177 bw ( KiB/s): min=97280, max=211879, per=8.01%, avg=156641.95, stdev=31483.76, samples=20 00:25:37.177 iops : min= 380, max= 827, avg=611.85, stdev=122.92, samples=20 00:25:37.177 lat (msec) : 10=0.02%, 20=0.50%, 50=4.82%, 100=42.44%, 250=51.74% 00:25:37.177 lat (msec) : 500=0.49% 00:25:37.177 cpu : usr=0.31%, sys=1.69%, ctx=1875, majf=0, minf=4097 00:25:37.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:37.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.177 issued rwts: total=6181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.177 job3: (groupid=0, jobs=1): err= 0: pid=1203494: Sun Jul 14 01:11:24 2024 00:25:37.177 read: IOPS=808, BW=202MiB/s (212MB/s)(2042MiB/10107msec) 00:25:37.177 slat (usec): min=13, max=43539, avg=1092.52, stdev=3121.84 00:25:37.177 clat (msec): min=4, max=234, avg=78.04, stdev=33.14 00:25:37.177 lat (msec): min=4, max=251, avg=79.13, stdev=33.57 00:25:37.177 clat percentiles (msec): 00:25:37.177 | 1.00th=[ 31], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 50], 00:25:37.177 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 82], 00:25:37.177 | 70.00th=[ 94], 80.00th=[ 109], 90.00th=[ 125], 95.00th=[ 138], 00:25:37.177 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 226], 99.95th=[ 234], 00:25:37.177 | 99.99th=[ 234] 00:25:37.177 bw ( KiB/s): min=98816, max=336384, per=10.61%, avg=207488.00, stdev=69646.04, samples=20 00:25:37.177 iops : min= 386, max= 1314, avg=810.50, stdev=272.05, samples=20 00:25:37.177 lat (msec) : 10=0.16%, 20=0.13%, 50=21.07%, 100=52.53%, 250=26.10% 00:25:37.177 cpu : usr=0.51%, sys=2.65%, ctx=1839, majf=0, minf=4097 00:25:37.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:37.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.177 issued rwts: total=8168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.177 job4: (groupid=0, jobs=1): err= 0: pid=1203495: Sun Jul 14 01:11:24 2024 00:25:37.177 read: IOPS=725, BW=181MiB/s (190MB/s)(1829MiB/10080msec) 00:25:37.177 slat (usec): min=9, max=85751, avg=797.04, stdev=3378.44 00:25:37.177 clat (usec): min=1796, max=225771, avg=87329.41, stdev=42608.73 00:25:37.177 lat (usec): min=1831, max=230926, avg=88126.46, stdev=43020.74 00:25:37.177 clat percentiles (msec): 00:25:37.177 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 48], 00:25:37.177 | 30.00th=[ 62], 40.00th=[ 78], 50.00th=[ 89], 60.00th=[ 101], 00:25:37.177 | 70.00th=[ 111], 80.00th=[ 124], 90.00th=[ 140], 95.00th=[ 153], 00:25:37.177 | 99.00th=[ 197], 99.50th=[ 220], 99.90th=[ 224], 99.95th=[ 226], 00:25:37.177 | 99.99th=[ 226] 00:25:37.177 bw ( KiB/s): min=138240, max=283648, per=9.49%, avg=185651.20, stdev=40923.40, samples=20 00:25:37.177 iops : min= 540, max= 1108, avg=725.20, stdev=159.86, samples=20 00:25:37.177 lat (msec) : 2=0.04%, 4=0.26%, 10=1.16%, 20=4.43%, 50=16.56% 00:25:37.177 lat (msec) : 100=37.39%, 250=40.16% 00:25:37.177 cpu : usr=0.30%, sys=1.99%, ctx=2172, majf=0, minf=4097 00:25:37.177 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:37.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.177 issued rwts: total=7315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.177 job5: (groupid=0, jobs=1): err= 0: pid=1203496: Sun Jul 14 01:11:24 2024 00:25:37.177 read: IOPS=693, BW=173MiB/s (182MB/s)(1753MiB/10114msec) 00:25:37.177 slat (usec): min=9, max=151255, avg=1063.05, stdev=4022.63 00:25:37.177 clat (msec): min=5, max=309, avg=91.21, stdev=35.09 00:25:37.177 lat (msec): min=8, max=309, avg=92.27, stdev=35.50 00:25:37.177 clat percentiles (msec): 00:25:37.177 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 51], 20.00th=[ 64], 00:25:37.177 | 30.00th=[ 73], 40.00th=[ 83], 50.00th=[ 90], 60.00th=[ 97], 00:25:37.177 | 70.00th=[ 106], 80.00th=[ 116], 90.00th=[ 134], 95.00th=[ 155], 00:25:37.177 | 99.00th=[ 197], 99.50th=[ 211], 99.90th=[ 220], 99.95th=[ 220], 00:25:37.177 | 99.99th=[ 309] 00:25:37.177 bw ( KiB/s): min=114688, max=240640, per=9.09%, avg=177817.60, stdev=37028.72, samples=20 00:25:37.177 iops : min= 448, max= 940, avg=694.60, stdev=144.64, samples=20 00:25:37.177 lat (msec) : 10=0.17%, 20=2.08%, 50=7.46%, 100=54.14%, 250=36.13% 00:25:37.177 lat (msec) : 500=0.01% 00:25:37.177 cpu : usr=0.32%, sys=2.17%, ctx=1748, majf=0, minf=3721 00:25:37.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:37.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.177 issued rwts: total=7010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.177 job6: (groupid=0, jobs=1): err= 0: pid=1203498: Sun Jul 14 01:11:24 2024 00:25:37.177 read: IOPS=826, BW=207MiB/s (217MB/s)(2088MiB/10106msec) 00:25:37.177 slat (usec): min=9, max=96764, avg=943.80, stdev=3299.26 00:25:37.177 clat (msec): min=5, max=228, avg=76.43, stdev=35.42 00:25:37.177 lat (msec): min=6, max=228, avg=77.37, stdev=35.92 00:25:37.177 clat percentiles (msec): 00:25:37.177 | 1.00th=[ 15], 5.00th=[ 30], 10.00th=[ 40], 20.00th=[ 44], 00:25:37.177 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 83], 00:25:37.177 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 127], 95.00th=[ 142], 00:25:37.177 | 99.00th=[ 171], 99.50th=[ 184], 99.90th=[ 190], 99.95th=[ 197], 00:25:37.177 | 99.99th=[ 228] 00:25:37.177 bw ( KiB/s): min=111616, max=364032, per=10.85%, avg=212224.00, stdev=73932.91, samples=20 00:25:37.177 iops : min= 436, max= 1422, avg=829.00, stdev=288.80, samples=20 00:25:37.177 lat (msec) : 10=0.57%, 20=1.54%, 50=29.01%, 100=44.37%, 250=24.51% 00:25:37.177 cpu : usr=0.48%, sys=2.45%, ctx=2072, majf=0, minf=4097 00:25:37.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:37.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.177 issued rwts: total=8353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.177 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.177 job7: (groupid=0, jobs=1): err= 0: pid=1203499: Sun Jul 14 01:11:24 2024 00:25:37.177 read: IOPS=700, BW=175MiB/s (184MB/s)(1770MiB/10115msec) 00:25:37.177 slat (usec): min=13, max=91263, avg=1319.91, stdev=3921.51 
00:25:37.177 clat (msec): min=4, max=241, avg=90.03, stdev=29.60 00:25:37.177 lat (msec): min=4, max=244, avg=91.35, stdev=30.09 00:25:37.177 clat percentiles (msec): 00:25:37.177 | 1.00th=[ 28], 5.00th=[ 43], 10.00th=[ 51], 20.00th=[ 67], 00:25:37.177 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 96], 00:25:37.177 | 70.00th=[ 105], 80.00th=[ 113], 90.00th=[ 127], 95.00th=[ 140], 00:25:37.177 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 220], 99.95th=[ 220], 00:25:37.177 | 99.99th=[ 243] 00:25:37.177 bw ( KiB/s): min=104960, max=327680, per=9.19%, avg=179660.80, stdev=43678.35, samples=20 00:25:37.177 iops : min= 410, max= 1280, avg=701.80, stdev=170.62, samples=20 00:25:37.177 lat (msec) : 10=0.17%, 20=0.32%, 50=9.48%, 100=55.51%, 250=34.51% 00:25:37.177 cpu : usr=0.48%, sys=2.30%, ctx=1570, majf=0, minf=4097 00:25:37.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:37.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.178 issued rwts: total=7081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.178 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.178 job8: (groupid=0, jobs=1): err= 0: pid=1203500: Sun Jul 14 01:11:24 2024 00:25:37.178 read: IOPS=596, BW=149MiB/s (156MB/s)(1504MiB/10082msec) 00:25:37.178 slat (usec): min=13, max=86982, avg=1469.54, stdev=4221.35 00:25:37.178 clat (msec): min=4, max=246, avg=105.74, stdev=33.10 00:25:37.178 lat (msec): min=4, max=246, avg=107.20, stdev=33.61 00:25:37.178 clat percentiles (msec): 00:25:37.178 | 1.00th=[ 18], 5.00th=[ 50], 10.00th=[ 69], 20.00th=[ 83], 00:25:37.178 | 30.00th=[ 91], 40.00th=[ 99], 50.00th=[ 107], 60.00th=[ 114], 00:25:37.178 | 70.00th=[ 122], 80.00th=[ 130], 90.00th=[ 144], 95.00th=[ 161], 00:25:37.178 | 99.00th=[ 192], 99.50th=[ 199], 99.90th=[ 209], 99.95th=[ 209], 00:25:37.178 | 99.99th=[ 247] 00:25:37.178 bw ( KiB/s): min=109568, max=212992, per=7.79%, avg=152362.15, stdev=27610.61, samples=20 00:25:37.178 iops : min= 428, max= 832, avg=595.15, stdev=107.85, samples=20 00:25:37.178 lat (msec) : 10=0.17%, 20=1.31%, 50=3.64%, 100=37.25%, 250=57.63% 00:25:37.178 cpu : usr=0.42%, sys=2.10%, ctx=1437, majf=0, minf=4097 00:25:37.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:37.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.178 issued rwts: total=6014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.178 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.178 job9: (groupid=0, jobs=1): err= 0: pid=1203501: Sun Jul 14 01:11:24 2024 00:25:37.178 read: IOPS=647, BW=162MiB/s (170MB/s)(1630MiB/10078msec) 00:25:37.178 slat (usec): min=10, max=73210, avg=1285.66, stdev=3873.90 00:25:37.178 clat (msec): min=4, max=224, avg=97.53, stdev=34.33 00:25:37.178 lat (msec): min=4, max=224, avg=98.82, stdev=34.85 00:25:37.178 clat percentiles (msec): 00:25:37.178 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 54], 20.00th=[ 72], 00:25:37.178 | 30.00th=[ 83], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 104], 00:25:37.178 | 70.00th=[ 114], 80.00th=[ 127], 90.00th=[ 142], 95.00th=[ 157], 00:25:37.178 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 203], 99.95th=[ 215], 00:25:37.178 | 99.99th=[ 226] 00:25:37.178 bw ( KiB/s): min=113152, max=288256, per=8.45%, avg=165324.80, stdev=37637.69, samples=20 00:25:37.178 iops 
: min= 442, max= 1126, avg=645.80, stdev=147.02, samples=20 00:25:37.178 lat (msec) : 10=0.78%, 20=1.10%, 50=6.87%, 100=47.00%, 250=44.24% 00:25:37.178 cpu : usr=0.45%, sys=2.16%, ctx=1650, majf=0, minf=4097 00:25:37.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:37.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.178 issued rwts: total=6521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.178 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.178 job10: (groupid=0, jobs=1): err= 0: pid=1203502: Sun Jul 14 01:11:24 2024 00:25:37.178 read: IOPS=648, BW=162MiB/s (170MB/s)(1625MiB/10032msec) 00:25:37.178 slat (usec): min=9, max=106969, avg=886.44, stdev=4197.56 00:25:37.178 clat (usec): min=999, max=340343, avg=97816.40, stdev=51572.79 00:25:37.178 lat (usec): min=1016, max=340358, avg=98702.84, stdev=51905.34 00:25:37.178 clat percentiles (msec): 00:25:37.178 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 31], 20.00th=[ 56], 00:25:37.178 | 30.00th=[ 78], 40.00th=[ 89], 50.00th=[ 99], 60.00th=[ 108], 00:25:37.178 | 70.00th=[ 120], 80.00th=[ 131], 90.00th=[ 153], 95.00th=[ 171], 00:25:37.178 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 334], 99.95th=[ 334], 00:25:37.178 | 99.99th=[ 342] 00:25:37.178 bw ( KiB/s): min=112128, max=275456, per=8.43%, avg=164831.25, stdev=45361.98, samples=20 00:25:37.178 iops : min= 438, max= 1076, avg=643.85, stdev=177.19, samples=20 00:25:37.178 lat (usec) : 1000=0.02% 00:25:37.178 lat (msec) : 2=0.23%, 4=1.40%, 10=3.60%, 20=1.63%, 50=10.91% 00:25:37.178 lat (msec) : 100=34.04%, 250=45.82%, 500=2.35% 00:25:37.178 cpu : usr=0.31%, sys=1.76%, ctx=1981, majf=0, minf=4097 00:25:37.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:37.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:37.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:37.178 issued rwts: total=6501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:37.178 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:37.178 00:25:37.178 Run status group 0 (all jobs): 00:25:37.178 READ: bw=1910MiB/s (2003MB/s), 149MiB/s-207MiB/s (156MB/s-217MB/s), io=18.9GiB (20.3GB), run=10028-10115msec 00:25:37.178 00:25:37.178 Disk stats (read/write): 00:25:37.178 nvme0n1: ios=15248/0, merge=0/0, ticks=1241691/0, in_queue=1241691, util=97.19% 00:25:37.178 nvme10n1: ios=12626/0, merge=0/0, ticks=1231693/0, in_queue=1231693, util=97.42% 00:25:37.178 nvme1n1: ios=12132/0, merge=0/0, ticks=1240742/0, in_queue=1240742, util=97.68% 00:25:37.178 nvme2n1: ios=16126/0, merge=0/0, ticks=1227392/0, in_queue=1227392, util=97.84% 00:25:37.178 nvme3n1: ios=14421/0, merge=0/0, ticks=1238048/0, in_queue=1238048, util=97.91% 00:25:37.178 nvme4n1: ios=13817/0, merge=0/0, ticks=1236870/0, in_queue=1236870, util=98.26% 00:25:37.178 nvme5n1: ios=16507/0, merge=0/0, ticks=1233987/0, in_queue=1233987, util=98.42% 00:25:37.178 nvme6n1: ios=13972/0, merge=0/0, ticks=1230788/0, in_queue=1230788, util=98.54% 00:25:37.178 nvme7n1: ios=11852/0, merge=0/0, ticks=1230093/0, in_queue=1230093, util=98.93% 00:25:37.178 nvme8n1: ios=12814/0, merge=0/0, ticks=1228538/0, in_queue=1228538, util=99.10% 00:25:37.178 nvme9n1: ios=12707/0, merge=0/0, ticks=1246655/0, in_queue=1246655, util=99.22% 00:25:37.178 01:11:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:37.178 [global] 00:25:37.178 thread=1 00:25:37.178 invalidate=1 00:25:37.178 rw=randwrite 00:25:37.178 time_based=1 00:25:37.178 runtime=10 00:25:37.178 ioengine=libaio 00:25:37.178 direct=1 00:25:37.178 bs=262144 00:25:37.178 iodepth=64 00:25:37.178 norandommap=1 00:25:37.178 numjobs=1 00:25:37.178 00:25:37.178 [job0] 00:25:37.178 filename=/dev/nvme0n1 00:25:37.178 [job1] 00:25:37.178 filename=/dev/nvme10n1 00:25:37.178 [job2] 00:25:37.178 filename=/dev/nvme1n1 00:25:37.178 [job3] 00:25:37.178 filename=/dev/nvme2n1 00:25:37.178 [job4] 00:25:37.178 filename=/dev/nvme3n1 00:25:37.178 [job5] 00:25:37.178 filename=/dev/nvme4n1 00:25:37.178 [job6] 00:25:37.178 filename=/dev/nvme5n1 00:25:37.178 [job7] 00:25:37.178 filename=/dev/nvme6n1 00:25:37.178 [job8] 00:25:37.178 filename=/dev/nvme7n1 00:25:37.178 [job9] 00:25:37.178 filename=/dev/nvme8n1 00:25:37.178 [job10] 00:25:37.178 filename=/dev/nvme9n1 00:25:37.178 Could not set queue depth (nvme0n1) 00:25:37.178 Could not set queue depth (nvme10n1) 00:25:37.178 Could not set queue depth (nvme1n1) 00:25:37.178 Could not set queue depth (nvme2n1) 00:25:37.178 Could not set queue depth (nvme3n1) 00:25:37.178 Could not set queue depth (nvme4n1) 00:25:37.178 Could not set queue depth (nvme5n1) 00:25:37.178 Could not set queue depth (nvme6n1) 00:25:37.178 Could not set queue depth (nvme7n1) 00:25:37.178 Could not set queue depth (nvme8n1) 00:25:37.178 Could not set queue depth (nvme9n1) 00:25:37.178 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:37.178 fio-3.35 00:25:37.178 Starting 11 threads 00:25:47.181 00:25:47.181 job0: (groupid=0, jobs=1): err= 0: pid=1204673: Sun Jul 14 01:11:35 2024 00:25:47.181 write: IOPS=310, BW=77.5MiB/s (81.3MB/s)(789MiB/10175msec); 0 zone resets 00:25:47.181 slat (usec): min=17, max=196149, avg=2454.32, stdev=7897.85 00:25:47.181 clat (msec): min=2, max=756, avg=203.84, stdev=150.90 00:25:47.181 lat (msec): min=3, max=756, avg=206.30, stdev=152.94 00:25:47.181 clat percentiles (msec): 00:25:47.181 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 44], 20.00th=[ 106], 00:25:47.181 | 
30.00th=[ 129], 40.00th=[ 146], 50.00th=[ 165], 60.00th=[ 194], 00:25:47.181 | 70.00th=[ 218], 80.00th=[ 292], 90.00th=[ 384], 95.00th=[ 575], 00:25:47.181 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 726], 99.95th=[ 760], 00:25:47.181 | 99.99th=[ 760] 00:25:47.181 bw ( KiB/s): min=22528, max=131584, per=6.71%, avg=79162.35, stdev=35413.83, samples=20 00:25:47.181 iops : min= 88, max= 514, avg=309.15, stdev=138.30, samples=20 00:25:47.181 lat (msec) : 4=0.41%, 10=1.62%, 20=3.14%, 50=6.15%, 100=7.13% 00:25:47.181 lat (msec) : 250=57.46%, 500=16.64%, 750=7.35%, 1000=0.10% 00:25:47.181 cpu : usr=0.89%, sys=1.13%, ctx=1615, majf=0, minf=1 00:25:47.181 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:25:47.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.181 issued rwts: total=0,3155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.181 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.181 job1: (groupid=0, jobs=1): err= 0: pid=1204685: Sun Jul 14 01:11:35 2024 00:25:47.182 write: IOPS=345, BW=86.3MiB/s (90.5MB/s)(871MiB/10090msec); 0 zone resets 00:25:47.182 slat (usec): min=25, max=191528, avg=2517.80, stdev=9607.58 00:25:47.182 clat (msec): min=3, max=1002, avg=182.71, stdev=177.42 00:25:47.182 lat (msec): min=3, max=1002, avg=185.23, stdev=179.95 00:25:47.182 clat percentiles (msec): 00:25:47.182 | 1.00th=[ 15], 5.00th=[ 42], 10.00th=[ 71], 20.00th=[ 84], 00:25:47.182 | 30.00th=[ 96], 40.00th=[ 113], 50.00th=[ 124], 60.00th=[ 134], 00:25:47.182 | 70.00th=[ 155], 80.00th=[ 186], 90.00th=[ 460], 95.00th=[ 625], 00:25:47.182 | 99.00th=[ 911], 99.50th=[ 919], 99.90th=[ 978], 99.95th=[ 1003], 00:25:47.182 | 99.99th=[ 1003] 00:25:47.182 bw ( KiB/s): min=12288, max=183296, per=7.43%, avg=87591.75, stdev=60091.15, samples=20 00:25:47.182 iops : min= 48, max= 716, avg=342.15, stdev=234.73, samples=20 00:25:47.182 lat (msec) : 4=0.03%, 10=0.49%, 20=1.21%, 50=4.79%, 100=25.51% 00:25:47.182 lat (msec) : 250=50.42%, 500=9.84%, 750=5.91%, 1000=1.72%, 2000=0.09% 00:25:47.182 cpu : usr=1.05%, sys=1.13%, ctx=1511, majf=0, minf=1 00:25:47.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:25:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.182 issued rwts: total=0,3485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.182 job2: (groupid=0, jobs=1): err= 0: pid=1204686: Sun Jul 14 01:11:35 2024 00:25:47.182 write: IOPS=277, BW=69.3MiB/s (72.7MB/s)(705MiB/10169msec); 0 zone resets 00:25:47.182 slat (usec): min=25, max=201730, avg=3388.63, stdev=10450.34 00:25:47.182 clat (msec): min=5, max=803, avg=227.37, stdev=174.67 00:25:47.182 lat (msec): min=5, max=803, avg=230.76, stdev=177.06 00:25:47.182 clat percentiles (msec): 00:25:47.182 | 1.00th=[ 35], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 74], 00:25:47.182 | 30.00th=[ 95], 40.00th=[ 150], 50.00th=[ 201], 60.00th=[ 226], 00:25:47.182 | 70.00th=[ 271], 80.00th=[ 326], 90.00th=[ 460], 95.00th=[ 667], 00:25:47.182 | 99.00th=[ 760], 99.50th=[ 785], 99.90th=[ 802], 99.95th=[ 802], 00:25:47.182 | 99.99th=[ 802] 00:25:47.182 bw ( KiB/s): min=18432, max=259584, per=5.98%, avg=70550.10, stdev=56997.24, samples=20 00:25:47.182 iops : min= 72, max= 1014, avg=275.55, stdev=222.67, samples=20 
00:25:47.182 lat (msec) : 10=0.11%, 20=0.28%, 50=1.49%, 100=29.69%, 250=34.87% 00:25:47.182 lat (msec) : 500=25.36%, 750=6.99%, 1000=1.21% 00:25:47.182 cpu : usr=0.84%, sys=0.78%, ctx=927, majf=0, minf=1 00:25:47.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:25:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.182 issued rwts: total=0,2819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.182 job3: (groupid=0, jobs=1): err= 0: pid=1204687: Sun Jul 14 01:11:35 2024 00:25:47.182 write: IOPS=438, BW=110MiB/s (115MB/s)(1116MiB/10179msec); 0 zone resets 00:25:47.182 slat (usec): min=18, max=239979, avg=1975.47, stdev=7283.26 00:25:47.182 clat (msec): min=2, max=424, avg=143.88, stdev=70.73 00:25:47.182 lat (msec): min=3, max=424, avg=145.86, stdev=71.48 00:25:47.182 clat percentiles (msec): 00:25:47.182 | 1.00th=[ 13], 5.00th=[ 34], 10.00th=[ 64], 20.00th=[ 87], 00:25:47.182 | 30.00th=[ 104], 40.00th=[ 124], 50.00th=[ 140], 60.00th=[ 150], 00:25:47.182 | 70.00th=[ 176], 80.00th=[ 197], 90.00th=[ 236], 95.00th=[ 262], 00:25:47.182 | 99.00th=[ 384], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 418], 00:25:47.182 | 99.99th=[ 426] 00:25:47.182 bw ( KiB/s): min=54272, max=184320, per=9.55%, avg=112631.95, stdev=38077.06, samples=20 00:25:47.182 iops : min= 212, max= 720, avg=439.90, stdev=148.78, samples=20 00:25:47.182 lat (msec) : 4=0.07%, 10=0.58%, 20=2.04%, 50=5.47%, 100=20.46% 00:25:47.182 lat (msec) : 250=64.12%, 500=7.26% 00:25:47.182 cpu : usr=1.20%, sys=1.33%, ctx=1718, majf=0, minf=1 00:25:47.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.182 issued rwts: total=0,4462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.182 job4: (groupid=0, jobs=1): err= 0: pid=1204688: Sun Jul 14 01:11:35 2024 00:25:47.182 write: IOPS=469, BW=117MiB/s (123MB/s)(1186MiB/10104msec); 0 zone resets 00:25:47.182 slat (usec): min=16, max=358353, avg=1317.96, stdev=8487.89 00:25:47.182 clat (usec): min=1263, max=940722, avg=135001.05, stdev=137681.98 00:25:47.182 lat (usec): min=1291, max=1027.8k, avg=136319.01, stdev=139055.13 00:25:47.182 clat percentiles (msec): 00:25:47.182 | 1.00th=[ 3], 5.00th=[ 11], 10.00th=[ 20], 20.00th=[ 45], 00:25:47.182 | 30.00th=[ 65], 40.00th=[ 87], 50.00th=[ 110], 60.00th=[ 128], 00:25:47.182 | 70.00th=[ 150], 80.00th=[ 174], 90.00th=[ 241], 95.00th=[ 368], 00:25:47.182 | 99.00th=[ 709], 99.50th=[ 802], 99.90th=[ 927], 99.95th=[ 927], 00:25:47.182 | 99.99th=[ 944] 00:25:47.182 bw ( KiB/s): min=17408, max=265216, per=10.15%, avg=119759.25, stdev=69142.62, samples=20 00:25:47.182 iops : min= 68, max= 1036, avg=467.75, stdev=270.10, samples=20 00:25:47.182 lat (msec) : 2=0.38%, 4=1.64%, 10=2.66%, 20=5.38%, 50=13.50% 00:25:47.182 lat (msec) : 100=21.28%, 250=45.78%, 500=5.38%, 750=3.40%, 1000=0.61% 00:25:47.182 cpu : usr=1.41%, sys=1.65%, ctx=3021, majf=0, minf=1 00:25:47.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:25:47.182 issued rwts: total=0,4742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.182 job5: (groupid=0, jobs=1): err= 0: pid=1204689: Sun Jul 14 01:11:35 2024 00:25:47.182 write: IOPS=456, BW=114MiB/s (120MB/s)(1161MiB/10182msec); 0 zone resets 00:25:47.182 slat (usec): min=18, max=52448, avg=1746.14, stdev=4215.53 00:25:47.182 clat (usec): min=1529, max=462620, avg=138496.15, stdev=74449.21 00:25:47.182 lat (msec): min=2, max=462, avg=140.24, stdev=75.26 00:25:47.182 clat percentiles (msec): 00:25:47.182 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 45], 20.00th=[ 83], 00:25:47.182 | 30.00th=[ 97], 40.00th=[ 121], 50.00th=[ 134], 60.00th=[ 144], 00:25:47.182 | 70.00th=[ 167], 80.00th=[ 188], 90.00th=[ 239], 95.00th=[ 266], 00:25:47.182 | 99.00th=[ 368], 99.50th=[ 409], 99.90th=[ 451], 99.95th=[ 451], 00:25:47.182 | 99.99th=[ 464] 00:25:47.182 bw ( KiB/s): min=53248, max=190976, per=9.94%, avg=117281.65, stdev=41868.63, samples=20 00:25:47.182 iops : min= 208, max= 746, avg=458.10, stdev=163.57, samples=20 00:25:47.182 lat (msec) : 2=0.04%, 4=0.19%, 10=1.53%, 20=3.83%, 50=5.17% 00:25:47.182 lat (msec) : 100=20.61%, 250=60.68%, 500=7.95% 00:25:47.182 cpu : usr=1.37%, sys=1.38%, ctx=2129, majf=0, minf=1 00:25:47.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.182 issued rwts: total=0,4644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.182 job6: (groupid=0, jobs=1): err= 0: pid=1204690: Sun Jul 14 01:11:35 2024 00:25:47.182 write: IOPS=402, BW=101MiB/s (106MB/s)(1024MiB/10168msec); 0 zone resets 00:25:47.182 slat (usec): min=21, max=246261, avg=1936.70, stdev=7052.53 00:25:47.182 clat (usec): min=1373, max=835052, avg=156897.13, stdev=143616.89 00:25:47.182 lat (usec): min=1409, max=835107, avg=158833.83, stdev=145370.98 00:25:47.182 clat percentiles (msec): 00:25:47.182 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 30], 20.00th=[ 69], 00:25:47.182 | 30.00th=[ 94], 40.00th=[ 106], 50.00th=[ 116], 60.00th=[ 142], 00:25:47.182 | 70.00th=[ 171], 80.00th=[ 211], 90.00th=[ 288], 95.00th=[ 443], 00:25:47.182 | 99.00th=[ 751], 99.50th=[ 776], 99.90th=[ 802], 99.95th=[ 835], 00:25:47.182 | 99.99th=[ 835] 00:25:47.182 bw ( KiB/s): min=18432, max=174080, per=8.75%, avg=103228.35, stdev=47046.14, samples=20 00:25:47.182 iops : min= 72, max= 680, avg=403.20, stdev=183.79, samples=20 00:25:47.182 lat (msec) : 2=0.17%, 4=0.51%, 10=2.49%, 20=3.74%, 50=9.55% 00:25:47.182 lat (msec) : 100=18.93%, 250=52.50%, 500=7.69%, 750=3.47%, 1000=0.95% 00:25:47.182 cpu : usr=1.12%, sys=1.39%, ctx=2122, majf=0, minf=1 00:25:47.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.182 issued rwts: total=0,4095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.182 job7: (groupid=0, jobs=1): err= 0: pid=1204691: Sun Jul 14 01:11:35 2024 00:25:47.182 write: IOPS=616, BW=154MiB/s (162MB/s)(1556MiB/10092msec); 0 zone resets 00:25:47.182 slat (usec): min=19, max=54348, avg=1023.08, stdev=2939.07 00:25:47.182 clat 
(usec): min=1522, max=344340, avg=102729.77, stdev=57099.06 00:25:47.182 lat (usec): min=1576, max=344394, avg=103752.85, stdev=57774.65 00:25:47.182 clat percentiles (msec): 00:25:47.182 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 47], 20.00th=[ 68], 00:25:47.182 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 99], 00:25:47.182 | 70.00th=[ 116], 80.00th=[ 140], 90.00th=[ 182], 95.00th=[ 213], 00:25:47.182 | 99.00th=[ 296], 99.50th=[ 313], 99.90th=[ 342], 99.95th=[ 342], 00:25:47.182 | 99.99th=[ 347] 00:25:47.182 bw ( KiB/s): min=62464, max=224256, per=13.37%, avg=157657.90, stdev=49929.61, samples=20 00:25:47.182 iops : min= 244, max= 876, avg=615.80, stdev=195.00, samples=20 00:25:47.182 lat (msec) : 2=0.03%, 4=0.13%, 10=0.72%, 20=1.77%, 50=8.93% 00:25:47.182 lat (msec) : 100=49.22%, 250=36.24%, 500=2.96% 00:25:47.182 cpu : usr=1.94%, sys=2.20%, ctx=3606, majf=0, minf=1 00:25:47.182 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:47.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.182 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.182 issued rwts: total=0,6223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.182 job8: (groupid=0, jobs=1): err= 0: pid=1204692: Sun Jul 14 01:11:35 2024 00:25:47.182 write: IOPS=572, BW=143MiB/s (150MB/s)(1450MiB/10136msec); 0 zone resets 00:25:47.182 slat (usec): min=17, max=57845, avg=781.85, stdev=2656.13 00:25:47.182 clat (msec): min=2, max=776, avg=110.99, stdev=87.89 00:25:47.182 lat (msec): min=2, max=776, avg=111.77, stdev=88.23 00:25:47.182 clat percentiles (msec): 00:25:47.182 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 26], 20.00th=[ 46], 00:25:47.182 | 30.00th=[ 68], 40.00th=[ 78], 50.00th=[ 95], 60.00th=[ 110], 00:25:47.183 | 70.00th=[ 132], 80.00th=[ 163], 90.00th=[ 197], 95.00th=[ 264], 00:25:47.183 | 99.00th=[ 468], 99.50th=[ 609], 99.90th=[ 709], 99.95th=[ 751], 00:25:47.183 | 99.99th=[ 776] 00:25:47.183 bw ( KiB/s): min=91136, max=230400, per=12.46%, avg=146908.75, stdev=38902.46, samples=20 00:25:47.183 iops : min= 356, max= 900, avg=573.85, stdev=151.96, samples=20 00:25:47.183 lat (msec) : 4=1.40%, 10=2.72%, 20=4.72%, 50=12.86%, 100=32.30% 00:25:47.183 lat (msec) : 250=40.58%, 500=4.62%, 750=0.74%, 1000=0.05% 00:25:47.183 cpu : usr=1.47%, sys=1.94%, ctx=4127, majf=0, minf=1 00:25:47.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:47.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.183 issued rwts: total=0,5801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.183 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.183 job9: (groupid=0, jobs=1): err= 0: pid=1204693: Sun Jul 14 01:11:35 2024 00:25:47.183 write: IOPS=358, BW=89.6MiB/s (94.0MB/s)(902MiB/10064msec); 0 zone resets 00:25:47.183 slat (usec): min=24, max=191238, avg=1997.63, stdev=7936.55 00:25:47.183 clat (usec): min=1712, max=849645, avg=176489.34, stdev=152643.34 00:25:47.183 lat (usec): min=1797, max=849704, avg=178486.96, stdev=154733.53 00:25:47.183 clat percentiles (msec): 00:25:47.183 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 45], 20.00th=[ 66], 00:25:47.183 | 30.00th=[ 89], 40.00th=[ 103], 50.00th=[ 125], 60.00th=[ 171], 00:25:47.183 | 70.00th=[ 203], 80.00th=[ 259], 90.00th=[ 351], 95.00th=[ 558], 00:25:47.183 | 99.00th=[ 726], 99.50th=[ 760], 
99.90th=[ 835], 99.95th=[ 852], 00:25:47.183 | 99.99th=[ 852] 00:25:47.183 bw ( KiB/s): min=18432, max=211456, per=7.69%, avg=90733.70, stdev=56948.58, samples=20 00:25:47.183 iops : min= 72, max= 826, avg=354.40, stdev=222.46, samples=20 00:25:47.183 lat (msec) : 2=0.06%, 4=0.44%, 10=3.60%, 20=2.25%, 50=6.27% 00:25:47.183 lat (msec) : 100=25.89%, 250=40.37%, 500=15.53%, 750=4.85%, 1000=0.75% 00:25:47.183 cpu : usr=1.19%, sys=1.26%, ctx=2244, majf=0, minf=1 00:25:47.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:25:47.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.183 issued rwts: total=0,3607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.183 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.183 job10: (groupid=0, jobs=1): err= 0: pid=1204694: Sun Jul 14 01:11:35 2024 00:25:47.183 write: IOPS=381, BW=95.4MiB/s (100MB/s)(972MiB/10184msec); 0 zone resets 00:25:47.183 slat (usec): min=22, max=270548, avg=1925.15, stdev=6800.57 00:25:47.183 clat (msec): min=2, max=736, avg=165.55, stdev=120.46 00:25:47.183 lat (msec): min=2, max=736, avg=167.47, stdev=121.66 00:25:47.183 clat percentiles (msec): 00:25:47.183 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 17], 20.00th=[ 70], 00:25:47.183 | 30.00th=[ 113], 40.00th=[ 142], 50.00th=[ 148], 60.00th=[ 171], 00:25:47.183 | 70.00th=[ 197], 80.00th=[ 234], 90.00th=[ 296], 95.00th=[ 405], 00:25:47.183 | 99.00th=[ 667], 99.50th=[ 709], 99.90th=[ 735], 99.95th=[ 735], 00:25:47.183 | 99.99th=[ 735] 00:25:47.183 bw ( KiB/s): min=38912, max=181760, per=8.30%, avg=97920.25, stdev=36495.13, samples=20 00:25:47.183 iops : min= 152, max= 710, avg=382.45, stdev=142.61, samples=20 00:25:47.183 lat (msec) : 4=0.41%, 10=4.76%, 20=5.81%, 50=5.94%, 100=10.73% 00:25:47.183 lat (msec) : 250=56.17%, 500=14.35%, 750=1.83% 00:25:47.183 cpu : usr=1.22%, sys=1.31%, ctx=2190, majf=0, minf=1 00:25:47.183 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:47.183 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.183 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:47.183 issued rwts: total=0,3888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.183 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:47.183 00:25:47.183 Run status group 0 (all jobs): 00:25:47.183 WRITE: bw=1152MiB/s (1208MB/s), 69.3MiB/s-154MiB/s (72.7MB/s-162MB/s), io=11.5GiB (12.3GB), run=10064-10184msec 00:25:47.183 00:25:47.183 Disk stats (read/write): 00:25:47.183 nvme0n1: ios=49/6308, merge=0/0, ticks=50/1243740, in_queue=1243790, util=97.39% 00:25:47.183 nvme10n1: ios=42/6753, merge=0/0, ticks=189/1209419, in_queue=1209608, util=98.67% 00:25:47.183 nvme1n1: ios=45/5636, merge=0/0, ticks=242/1234149, in_queue=1234391, util=99.41% 00:25:47.183 nvme2n1: ios=51/8915, merge=0/0, ticks=4542/1173261, in_queue=1177803, util=100.00% 00:25:47.183 nvme3n1: ios=0/9149, merge=0/0, ticks=0/1221768, in_queue=1221768, util=97.77% 00:25:47.183 nvme4n1: ios=46/9275, merge=0/0, ticks=119/1242086, in_queue=1242205, util=99.05% 00:25:47.183 nvme5n1: ios=0/8029, merge=0/0, ticks=0/1203798, in_queue=1203798, util=98.25% 00:25:47.183 nvme6n1: ios=41/12238, merge=0/0, ticks=1498/1223780, in_queue=1225278, util=100.00% 00:25:47.183 nvme7n1: ios=0/11435, merge=0/0, ticks=0/1231964, in_queue=1231964, util=98.70% 00:25:47.183 nvme8n1: ios=39/6934, merge=0/0, ticks=1753/1218076, 
in_queue=1219829, util=100.00% 00:25:47.183 nvme9n1: ios=37/7756, merge=0/0, ticks=922/1244430, in_queue=1245352, util=100.00% 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:47.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.183 01:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:47.183 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.183 01:11:36 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:47.183 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.183 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:47.444 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.444 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:47.704 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:47.704 01:11:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:47.704 01:11:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.704 01:11:36 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.704 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:47.965 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.965 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:48.223 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # return 0 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:48.223 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.223 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:48.481 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:48.481 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:48.481 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:48.482 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:48.482 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.482 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.482 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.482 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:48.482 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:48.742 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.742 01:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- 
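The teardown traced above repeats one fixed pattern per subsystem: the initiator disconnects the controller, a helper polls lsblk until the SPDKn serial disappears, and the subsystem is deleted on the target over RPC. Condensed from the xtrace of multiconnection.sh@37-40 (a sketch, not the verbatim script; waitforserial_disconnect and rpc_cmd are helpers from autotest_common.sh):

  # per-subsystem teardown, as traced above for cnode1..cnode11
  for i in $(seq 1 $NVMF_SUBSYS); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"              # drop the initiator-side controller
    waitforserial_disconnect "SPDK${i}"                             # poll 'lsblk -l -o NAME,SERIAL' until the serial is gone
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # remove the subsystem on the target
  done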
nvmf/common.sh@488 -- # nvmfcleanup 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:48.742 rmmod nvme_tcp 00:25:48.742 rmmod nvme_fabrics 00:25:48.742 rmmod nvme_keyring 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1199228 ']' 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1199228 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1199228 ']' 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1199228 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1199228 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1199228' 00:25:48.742 killing process with pid 1199228 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1199228 00:25:48.742 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1199228 00:25:49.313 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:49.313 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:49.313 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:49.313 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:49.313 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:49.313 01:11:38 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.313 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.313 01:11:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.854 01:11:40 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:51.854 00:25:51.854 real 1m0.550s 00:25:51.854 user 3m22.342s 00:25:51.854 sys 0m23.515s 00:25:51.854 01:11:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:51.854 01:11:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.854 ************************************ 00:25:51.854 
END TEST nvmf_multiconnection 00:25:51.854 ************************************ 00:25:51.854 01:11:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:51.854 01:11:40 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:51.854 01:11:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:51.854 01:11:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:51.854 01:11:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:51.854 ************************************ 00:25:51.854 START TEST nvmf_initiator_timeout 00:25:51.854 ************************************ 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:51.854 * Looking for test storage... 00:25:51.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.854 01:11:40 
nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:51.854 01:11:40 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:51.854 01:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.232 
01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:53.232 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.232 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:53.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:53.233 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:53.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- 
# ip -4 addr flush cvl_0_0 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.233 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:53.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:25:53.492 00:25:53.492 --- 10.0.0.2 ping statistics --- 00:25:53.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.492 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:25:53.492 00:25:53.492 --- 10.0.0.1 ping statistics --- 00:25:53.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.492 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1207879 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1207879 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1207879 ']' 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.492 01:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.492 [2024-07-14 01:11:42.798800] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:25:53.492 [2024-07-14 01:11:42.798902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.492 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.492 [2024-07-14 01:11:42.868604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.750 [2024-07-14 01:11:42.960072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.750 [2024-07-14 01:11:42.960130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.750 [2024-07-14 01:11:42.960153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.750 [2024-07-14 01:11:42.960164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.750 [2024-07-14 01:11:42.960188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
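For this phy run, nvmf_tcp_init (traced above from nvmf/common.sh@229 onward) splits the two detected E810 ports between the host and a network namespace so initiator and target talk over a real link: cvl_0_0 moves into cvl_0_0_ns_spdk as the target-side port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator-side port (10.0.0.1); nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch of the sequence as traced:

  # network setup and target launch, condensed from the trace above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # sanity checks in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                                         # recorded (1207879 here) for the later killprocess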
00:25:53.750 [2024-07-14 01:11:42.960294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.750 [2024-07-14 01:11:42.960359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.750 [2024-07-14 01:11:42.960455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.750 [2024-07-14 01:11:42.960458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.750 Malloc0 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.750 Delay0 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.750 [2024-07-14 01:11:43.145087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.750 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.008 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.008 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.008 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.008 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.008 [2024-07-14 01:11:43.173371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.008 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.008 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:54.574 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:54.574 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:54.574 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.574 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:54.574 01:11:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1208306 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:56.476 01:11:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:56.476 [global] 00:25:56.476 thread=1 00:25:56.477 invalidate=1 00:25:56.477 rw=write 00:25:56.477 time_based=1 00:25:56.477 runtime=60 00:25:56.477 ioengine=libaio 00:25:56.477 direct=1 00:25:56.477 bs=4096 00:25:56.477 iodepth=1 00:25:56.477 norandommap=0 00:25:56.477 numjobs=1 00:25:56.477 00:25:56.477 verify_dump=1 00:25:56.477 verify_backlog=512 00:25:56.477 verify_state_save=0 00:25:56.477 do_verify=1 00:25:56.477 verify=crc32c-intel 00:25:56.477 [job0] 00:25:56.477 filename=/dev/nvme0n1 00:25:56.766 Could not set queue depth (nvme0n1) 00:25:56.766 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:56.766 fio-3.35 00:25:56.766 
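The target-side configuration for the initiator_timeout test, reconstructed from the rpc_cmd calls traced above (initiator_timeout.sh@19-29), stacks a delay bdev on a 64 MiB malloc bdev and exports it over TCP before the initiator connects and the 60 s fio write job starts. A condensed sketch (paths shortened; rpc_cmd wraps scripts/rpc.py against the running target):

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB backing bdev, 512 B blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us avg/p99 read and write latency
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: connect, then start the verified 60 s write job in the background
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
               --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &
  fio_pid=$!                                                               # 1208306 in this run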
Starting 1 thread 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.055 true 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.055 true 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.055 true 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:00.055 true 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.055 01:11:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:02.591 true 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:02.591 true 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:02.591 true 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 
-- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:02.591 true 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:02.591 01:11:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1208306 00:26:58.821 00:26:58.821 job0: (groupid=0, jobs=1): err= 0: pid=1208375: Sun Jul 14 01:12:46 2024 00:26:58.821 read: IOPS=120, BW=483KiB/s (495kB/s)(28.3MiB/60039msec) 00:26:58.821 slat (usec): min=5, max=3858, avg=16.52, stdev=46.03 00:26:58.821 clat (usec): min=332, max=40917k, avg=7857.11, stdev=480493.97 00:26:58.821 lat (usec): min=339, max=40917k, avg=7873.62, stdev=480494.02 00:26:58.821 clat percentiles (usec): 00:26:58.821 | 1.00th=[ 351], 5.00th=[ 359], 10.00th=[ 367], 00:26:58.821 | 20.00th=[ 379], 30.00th=[ 388], 40.00th=[ 400], 00:26:58.821 | 50.00th=[ 412], 60.00th=[ 433], 70.00th=[ 457], 00:26:58.821 | 80.00th=[ 494], 90.00th=[ 578], 95.00th=[ 660], 00:26:58.821 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:58.821 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:58.821 write: IOPS=127, BW=512KiB/s (524kB/s)(30.0MiB/60039msec); 0 zone resets 00:26:58.821 slat (usec): min=7, max=29144, avg=27.45, stdev=332.57 00:26:58.821 clat (usec): min=229, max=556, avg=343.95, stdev=57.19 00:26:58.821 lat (usec): min=236, max=29601, avg=371.40, stdev=340.09 00:26:58.821 clat percentiles (usec): 00:26:58.821 | 1.00th=[ 241], 5.00th=[ 253], 10.00th=[ 269], 20.00th=[ 293], 00:26:58.821 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 355], 00:26:58.821 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 437], 00:26:58.821 | 99.00th=[ 469], 99.50th=[ 482], 99.90th=[ 529], 99.95th=[ 537], 00:26:58.821 | 99.99th=[ 553] 00:26:58.821 bw ( KiB/s): min= 2104, max= 6736, per=100.00%, avg=4388.57, stdev=1182.36, samples=14 00:26:58.821 iops : min= 526, max= 1684, avg=1097.14, stdev=295.59, samples=14 00:26:58.821 lat (usec) : 250=1.91%, 500=88.72%, 750=7.21%, 1000=0.01% 00:26:58.821 lat (msec) : 2=0.02%, 50=2.12%, >=2000=0.01% 00:26:58.821 cpu : usr=0.36%, sys=0.65%, ctx=14936, majf=0, minf=2 00:26:58.821 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:58.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:58.821 issued rwts: total=7253,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:58.821 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:58.821 00:26:58.821 Run status group 0 (all jobs): 00:26:58.821 READ: bw=483KiB/s (495kB/s), 483KiB/s-483KiB/s (495kB/s-495kB/s), io=28.3MiB (29.7MB), run=60039-60039msec 00:26:58.821 WRITE: bw=512KiB/s (524kB/s), 512KiB/s-512KiB/s (524kB/s-524kB/s), io=30.0MiB (31.5MB), run=60039-60039msec 00:26:58.821 00:26:58.821 Disk stats (read/write): 00:26:58.821 nvme0n1: ios=7301/7680, merge=0/0, ticks=17129/2435, in_queue=19564, util=99.69% 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:58.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:58.821 01:12:46 
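What makes this an initiator-timeout exercise is the latency flip traced around the fio run (initiator_timeout.sh@40-54): while the 60 s job is in flight, the Delay0 latencies are raised to roughly 31 s, just past the kernel initiator's default 30 s I/O timeout, then after a short sleep dropped back to 30 us so the stalled I/O can complete. The multi-second worst-case read completions in the stats above are that injected stall, and the test passes only if fio still exits cleanly. Condensed from the trace (values are microseconds; p99_write really is set an order of magnitude higher here, as logged):

  rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  rpc_cmd bdev_delay_update_latency Delay0 avg_read  30
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  30
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 30
  fio_status=0
  wait "$fio_pid" || fio_status=$?              # 'fio successful as expected' requires status 0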
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:58.821 nvmf hotplug test: fio successful as expected 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:58.821 rmmod nvme_tcp 00:26:58.821 rmmod nvme_fabrics 00:26:58.821 rmmod nvme_keyring 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1207879 ']' 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1207879 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1207879 ']' 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1207879 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:58.821 01:12:46 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1207879 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1207879' 00:26:58.821 killing process with pid 1207879 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1207879 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1207879 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.821 01:12:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.389 01:12:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:59.389 00:26:59.389 real 1m8.073s 00:26:59.389 user 4m8.915s 00:26:59.389 sys 0m7.634s 00:26:59.389 01:12:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.389 01:12:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.389 ************************************ 00:26:59.389 END TEST nvmf_initiator_timeout 00:26:59.389 ************************************ 00:26:59.649 01:12:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:59.649 01:12:48 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:59.649 01:12:48 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:59.649 01:12:48 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:59.649 01:12:48 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:59.649 01:12:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:27:01.553 01:12:50 nvmf_tcp -- 
nvmf/common.sh@297 -- # local -ga x722 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:01.553 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:01.553 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.553 
01:12:50 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:01.553 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:01.553 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:27:01.553 01:12:50 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:01.553 01:12:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:01.553 01:12:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.553 01:12:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.553 ************************************ 00:27:01.553 START TEST nvmf_perf_adq 00:27:01.553 ************************************ 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:01.553 * Looking for test storage... 
00:27:01.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.553 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.554 01:12:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:04.111 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:04.111 Found 0000:0a:00.1 (0x8086 - 0x159b) 
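The device-detection loop traced here resolves each supported PCI function to its kernel net device by globbing sysfs, which is where the "Found net devices under ..." lines in this trace come from. A minimal stand-alone sketch of that lookup, assuming the same two E810 functions (0x8086:0x159b) reported on this host:

    # map each detected PCI function to its net device name, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in nvmf/common.sh
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"   # prints cvl_0_0 and cvl_0_1 on this host
    done

The collected names populate net_devs and TCP_INTERFACE_LIST, from which nvmftestinit later picks cvl_0_0 as the target-side interface and cvl_0_1 as the initiator-side interface.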
00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.111 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:04.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:04.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:27:04.112 01:12:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:04.373 01:12:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:06.276 01:12:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:11.549 01:13:00 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:11.549 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:11.549 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.549 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:11.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:11.550 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.550 01:13:00 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:27:11.550 00:27:11.550 --- 10.0.0.2 ping statistics --- 00:27:11.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.550 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:27:11.550 00:27:11.550 --- 10.0.0.1 ping statistics --- 00:27:11.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.550 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1220506 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1220506 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1220506 ']' 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.550 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.550 [2024-07-14 01:13:00.777322] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:27:11.550 [2024-07-14 01:13:00.777414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.550 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.550 [2024-07-14 01:13:00.847111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:11.550 [2024-07-14 01:13:00.939957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.550 [2024-07-14 01:13:00.940006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.550 [2024-07-14 01:13:00.940021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.550 [2024-07-14 01:13:00.940034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.550 [2024-07-14 01:13:00.940044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:11.550 [2024-07-14 01:13:00.940118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.550 [2024-07-14 01:13:00.940154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.550 [2024-07-14 01:13:00.940182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.550 [2024-07-14 01:13:00.940183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.810 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:11.810 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:11.810 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:11.810 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.810 01:13:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.810 01:13:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.810 01:13:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.810 [2024-07-14 01:13:01.160905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.810 Malloc1 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.810 [2024-07-14 01:13:01.214232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1220652 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:11.810 01:13:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:12.071 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:13.977 
"tick_rate": 2700000000, 00:27:13.977 "poll_groups": [ 00:27:13.977 { 00:27:13.977 "name": "nvmf_tgt_poll_group_000", 00:27:13.977 "admin_qpairs": 1, 00:27:13.977 "io_qpairs": 1, 00:27:13.977 "current_admin_qpairs": 1, 00:27:13.977 "current_io_qpairs": 1, 00:27:13.977 "pending_bdev_io": 0, 00:27:13.977 "completed_nvme_io": 19252, 00:27:13.977 "transports": [ 00:27:13.977 { 00:27:13.977 "trtype": "TCP" 00:27:13.977 } 00:27:13.977 ] 00:27:13.977 }, 00:27:13.977 { 00:27:13.977 "name": "nvmf_tgt_poll_group_001", 00:27:13.977 "admin_qpairs": 0, 00:27:13.977 "io_qpairs": 1, 00:27:13.977 "current_admin_qpairs": 0, 00:27:13.977 "current_io_qpairs": 1, 00:27:13.977 "pending_bdev_io": 0, 00:27:13.977 "completed_nvme_io": 19311, 00:27:13.977 "transports": [ 00:27:13.977 { 00:27:13.977 "trtype": "TCP" 00:27:13.977 } 00:27:13.977 ] 00:27:13.977 }, 00:27:13.977 { 00:27:13.977 "name": "nvmf_tgt_poll_group_002", 00:27:13.977 "admin_qpairs": 0, 00:27:13.977 "io_qpairs": 1, 00:27:13.977 "current_admin_qpairs": 0, 00:27:13.977 "current_io_qpairs": 1, 00:27:13.977 "pending_bdev_io": 0, 00:27:13.977 "completed_nvme_io": 19018, 00:27:13.977 "transports": [ 00:27:13.977 { 00:27:13.977 "trtype": "TCP" 00:27:13.977 } 00:27:13.977 ] 00:27:13.977 }, 00:27:13.977 { 00:27:13.977 "name": "nvmf_tgt_poll_group_003", 00:27:13.977 "admin_qpairs": 0, 00:27:13.977 "io_qpairs": 1, 00:27:13.977 "current_admin_qpairs": 0, 00:27:13.977 "current_io_qpairs": 1, 00:27:13.977 "pending_bdev_io": 0, 00:27:13.977 "completed_nvme_io": 19268, 00:27:13.977 "transports": [ 00:27:13.977 { 00:27:13.977 "trtype": "TCP" 00:27:13.977 } 00:27:13.977 ] 00:27:13.977 } 00:27:13.977 ] 00:27:13.977 }' 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:13.977 01:13:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1220652 00:27:22.093 Initializing NVMe Controllers 00:27:22.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:22.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:22.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:22.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:22.093 Initialization complete. Launching workers. 
00:27:22.093 ======================================================== 00:27:22.093 Latency(us) 00:27:22.093 Device Information : IOPS MiB/s Average min max 00:27:22.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10536.10 41.16 6074.57 2351.82 9067.16 00:27:22.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10608.00 41.44 6033.44 2100.33 9225.63 00:27:22.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10387.30 40.58 6162.71 1525.97 8950.12 00:27:22.093 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10517.90 41.09 6085.00 2097.50 8025.04 00:27:22.093 ======================================================== 00:27:22.093 Total : 42049.29 164.26 6088.58 1525.97 9225.63 00:27:22.093 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.093 rmmod nvme_tcp 00:27:22.093 rmmod nvme_fabrics 00:27:22.093 rmmod nvme_keyring 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1220506 ']' 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1220506 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1220506 ']' 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1220506 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.093 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220506 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220506' 00:27:22.351 killing process with pid 1220506 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1220506 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1220506 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.351 01:13:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.885 01:13:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:24.885 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:24.885 01:13:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:25.151 01:13:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:27.057 01:13:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:32.370 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:32.370 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.370 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.370 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.370 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:32.371 01:13:21 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:32.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:32.371 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:32.371 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:32.371 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:32.371 
01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:32.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:32.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:27:32.371 00:27:32.371 --- 10.0.0.2 ping statistics --- 00:27:32.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.371 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:32.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:32.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:27:32.371 00:27:32.371 --- 10.0.0.1 ping statistics --- 00:27:32.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:32.371 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:32.371 net.core.busy_poll = 1 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:32.371 net.core.busy_read = 1 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1223261 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1223261 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1223261 ']' 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:32.371 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.371 [2024-07-14 01:13:21.712899] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:32.371 [2024-07-14 01:13:21.712989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.371 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.371 [2024-07-14 01:13:21.776676] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.629 [2024-07-14 01:13:21.862453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.629 [2024-07-14 01:13:21.862507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.629 [2024-07-14 01:13:21.862520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.629 [2024-07-14 01:13:21.862531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.629 [2024-07-14 01:13:21.862541] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
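The adq_configure_driver step above condenses to the following shell sequence (a minimal sketch using the interface, IP and port seen in this run; the log executes it inside the cvl_0_0_ns_spdk namespace):

# Enable hardware TC offload and busy polling for ADQ
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 = 2 queues starting at queue 0, TC1 = 2 queues starting at queue 2
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP traffic (TCP port 4420 to 10.0.0.2) into TC1 in hardware (skip_sw)
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Followed by scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS with the receive queues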
00:27:32.629 [2024-07-14 01:13:21.862618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.629 [2024-07-14 01:13:21.862685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.629 [2024-07-14 01:13:21.862751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.629 [2024-07-14 01:13:21.862753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.629 01:13:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.888 [2024-07-14 01:13:22.086745] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.888 Malloc1 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.888 01:13:22 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:32.888 [2024-07-14 01:13:22.138493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1223292 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:32.888 01:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:32.888 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.788 01:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:34.788 01:13:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.788 01:13:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:34.788 01:13:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.788 01:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:34.788 "tick_rate": 2700000000, 00:27:34.788 "poll_groups": [ 00:27:34.788 { 00:27:34.788 "name": "nvmf_tgt_poll_group_000", 00:27:34.788 "admin_qpairs": 1, 00:27:34.788 "io_qpairs": 3, 00:27:34.788 "current_admin_qpairs": 1, 00:27:34.788 "current_io_qpairs": 3, 00:27:34.788 "pending_bdev_io": 0, 00:27:34.788 "completed_nvme_io": 27122, 00:27:34.788 "transports": [ 00:27:34.788 { 00:27:34.788 "trtype": "TCP" 00:27:34.788 } 00:27:34.788 ] 00:27:34.788 }, 00:27:34.788 { 00:27:34.788 "name": "nvmf_tgt_poll_group_001", 00:27:34.788 "admin_qpairs": 0, 00:27:34.788 "io_qpairs": 1, 00:27:34.788 "current_admin_qpairs": 0, 00:27:34.788 "current_io_qpairs": 1, 00:27:34.788 "pending_bdev_io": 0, 00:27:34.788 "completed_nvme_io": 24638, 00:27:34.788 "transports": [ 00:27:34.788 { 00:27:34.788 "trtype": "TCP" 00:27:34.788 } 00:27:34.788 ] 00:27:34.788 }, 00:27:34.788 { 00:27:34.788 "name": "nvmf_tgt_poll_group_002", 00:27:34.788 "admin_qpairs": 0, 00:27:34.788 "io_qpairs": 0, 00:27:34.788 "current_admin_qpairs": 0, 00:27:34.788 "current_io_qpairs": 0, 00:27:34.788 "pending_bdev_io": 0, 00:27:34.788 "completed_nvme_io": 0, 
00:27:34.788 "transports": [ 00:27:34.788 { 00:27:34.788 "trtype": "TCP" 00:27:34.788 } 00:27:34.788 ] 00:27:34.788 }, 00:27:34.788 { 00:27:34.788 "name": "nvmf_tgt_poll_group_003", 00:27:34.788 "admin_qpairs": 0, 00:27:34.788 "io_qpairs": 0, 00:27:34.788 "current_admin_qpairs": 0, 00:27:34.788 "current_io_qpairs": 0, 00:27:34.788 "pending_bdev_io": 0, 00:27:34.788 "completed_nvme_io": 0, 00:27:34.788 "transports": [ 00:27:34.788 { 00:27:34.788 "trtype": "TCP" 00:27:34.788 } 00:27:34.788 ] 00:27:34.788 } 00:27:34.788 ] 00:27:34.788 }' 00:27:34.788 01:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:34.788 01:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:35.047 01:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:35.047 01:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:35.047 01:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1223292 00:27:43.162 Initializing NVMe Controllers 00:27:43.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:43.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:43.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:43.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:43.162 Initialization complete. Launching workers. 00:27:43.162 ======================================================== 00:27:43.162 Latency(us) 00:27:43.162 Device Information : IOPS MiB/s Average min max 00:27:43.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4666.10 18.23 13730.12 1929.99 62598.80 00:27:43.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4670.30 18.24 13704.51 2510.07 63092.30 00:27:43.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12971.69 50.67 4933.60 1526.04 7382.90 00:27:43.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4805.60 18.77 13319.13 2646.87 58884.70 00:27:43.162 ======================================================== 00:27:43.162 Total : 27113.69 105.91 9444.45 1526.04 63092.30 00:27:43.162 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:43.162 rmmod nvme_tcp 00:27:43.162 rmmod nvme_fabrics 00:27:43.162 rmmod nvme_keyring 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1223261 ']' 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1223261 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1223261 ']' 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1223261 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1223261 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1223261' 00:27:43.162 killing process with pid 1223261 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1223261 00:27:43.162 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1223261 00:27:43.421 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:43.421 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:43.421 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:43.421 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.421 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.421 01:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.421 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.421 01:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.329 01:13:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:45.329 01:13:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:45.329 00:27:45.329 real 0m43.862s 00:27:45.329 user 2m37.702s 00:27:45.329 sys 0m10.370s 00:27:45.329 01:13:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:45.329 01:13:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:45.329 ************************************ 00:27:45.329 END TEST nvmf_perf_adq 00:27:45.329 ************************************ 00:27:45.329 01:13:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:45.329 01:13:34 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:45.329 01:13:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:45.329 01:13:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.329 01:13:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.329 ************************************ 00:27:45.329 START TEST nvmf_shutdown 00:27:45.329 ************************************ 00:27:45.329 01:13:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:45.588 * Looking for test storage... 
00:27:45.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:45.588 ************************************ 00:27:45.588 START TEST nvmf_shutdown_tc1 00:27:45.588 ************************************ 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:45.588 01:13:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:45.588 01:13:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:47.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:47.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.488 01:13:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:47.488 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:47.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.488 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.489 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.746 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:27:47.747 00:27:47.747 --- 10.0.0.2 ping statistics --- 00:27:47.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.747 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:47.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:27:47.747 00:27:47.747 --- 10.0.0.1 ping statistics --- 00:27:47.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.747 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1226462 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1226462 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1226462 ']' 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:47.747 01:13:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 [2024-07-14 01:13:37.008629] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:27:47.747 [2024-07-14 01:13:37.008704] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.747 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.747 [2024-07-14 01:13:37.075651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:48.005 [2024-07-14 01:13:37.168380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:48.005 [2024-07-14 01:13:37.168442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.005 [2024-07-14 01:13:37.168468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:48.005 [2024-07-14 01:13:37.168483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:48.005 [2024-07-14 01:13:37.168495] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:48.005 [2024-07-14 01:13:37.168577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.005 [2024-07-14 01:13:37.168695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:48.005 [2024-07-14 01:13:37.168763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.005 [2024-07-14 01:13:37.168761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:48.005 [2024-07-14 01:13:37.310516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:48.005 01:13:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.005 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:48.005 Malloc1 00:27:48.005 [2024-07-14 01:13:37.386483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.005 Malloc2 00:27:48.263 Malloc3 00:27:48.263 Malloc4 00:27:48.263 Malloc5 00:27:48.263 Malloc6 00:27:48.263 Malloc7 00:27:48.522 Malloc8 00:27:48.522 Malloc9 00:27:48.522 Malloc10 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1226637 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1226637 
/var/tmp/bdevperf.sock 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1226637 ']' 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:48.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.522 { 00:27:48.522 "params": { 00:27:48.522 "name": "Nvme$subsystem", 00:27:48.522 "trtype": "$TEST_TRANSPORT", 00:27:48.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.522 "adrfam": "ipv4", 00:27:48.522 "trsvcid": "$NVMF_PORT", 00:27:48.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.522 "hdgst": ${hdgst:-false}, 00:27:48.522 "ddgst": ${ddgst:-false} 00:27:48.522 }, 00:27:48.522 "method": "bdev_nvme_attach_controller" 00:27:48.522 } 00:27:48.522 EOF 00:27:48.522 )") 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.522 { 00:27:48.522 "params": { 00:27:48.522 "name": "Nvme$subsystem", 00:27:48.522 "trtype": "$TEST_TRANSPORT", 00:27:48.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.522 "adrfam": "ipv4", 00:27:48.522 "trsvcid": "$NVMF_PORT", 00:27:48.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.522 "hdgst": ${hdgst:-false}, 00:27:48.522 "ddgst": ${ddgst:-false} 00:27:48.522 }, 00:27:48.522 "method": "bdev_nvme_attach_controller" 00:27:48.522 } 00:27:48.522 EOF 00:27:48.522 )") 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.522 { 00:27:48.522 "params": { 00:27:48.522 
"name": "Nvme$subsystem", 00:27:48.522 "trtype": "$TEST_TRANSPORT", 00:27:48.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.522 "adrfam": "ipv4", 00:27:48.522 "trsvcid": "$NVMF_PORT", 00:27:48.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.522 "hdgst": ${hdgst:-false}, 00:27:48.522 "ddgst": ${ddgst:-false} 00:27:48.522 }, 00:27:48.522 "method": "bdev_nvme_attach_controller" 00:27:48.522 } 00:27:48.522 EOF 00:27:48.522 )") 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.522 { 00:27:48.522 "params": { 00:27:48.522 "name": "Nvme$subsystem", 00:27:48.522 "trtype": "$TEST_TRANSPORT", 00:27:48.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.522 "adrfam": "ipv4", 00:27:48.522 "trsvcid": "$NVMF_PORT", 00:27:48.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.522 "hdgst": ${hdgst:-false}, 00:27:48.522 "ddgst": ${ddgst:-false} 00:27:48.522 }, 00:27:48.522 "method": "bdev_nvme_attach_controller" 00:27:48.522 } 00:27:48.522 EOF 00:27:48.522 )") 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.522 { 00:27:48.522 "params": { 00:27:48.522 "name": "Nvme$subsystem", 00:27:48.522 "trtype": "$TEST_TRANSPORT", 00:27:48.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.522 "adrfam": "ipv4", 00:27:48.522 "trsvcid": "$NVMF_PORT", 00:27:48.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.522 "hdgst": ${hdgst:-false}, 00:27:48.522 "ddgst": ${ddgst:-false} 00:27:48.522 }, 00:27:48.522 "method": "bdev_nvme_attach_controller" 00:27:48.522 } 00:27:48.522 EOF 00:27:48.522 )") 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.522 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.522 { 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme$subsystem", 00:27:48.523 "trtype": "$TEST_TRANSPORT", 00:27:48.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "$NVMF_PORT", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.523 "hdgst": ${hdgst:-false}, 00:27:48.523 "ddgst": ${ddgst:-false} 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 } 00:27:48.523 EOF 00:27:48.523 )") 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.523 { 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme$subsystem", 
00:27:48.523 "trtype": "$TEST_TRANSPORT", 00:27:48.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "$NVMF_PORT", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.523 "hdgst": ${hdgst:-false}, 00:27:48.523 "ddgst": ${ddgst:-false} 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 } 00:27:48.523 EOF 00:27:48.523 )") 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.523 { 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme$subsystem", 00:27:48.523 "trtype": "$TEST_TRANSPORT", 00:27:48.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "$NVMF_PORT", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.523 "hdgst": ${hdgst:-false}, 00:27:48.523 "ddgst": ${ddgst:-false} 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 } 00:27:48.523 EOF 00:27:48.523 )") 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.523 { 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme$subsystem", 00:27:48.523 "trtype": "$TEST_TRANSPORT", 00:27:48.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "$NVMF_PORT", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.523 "hdgst": ${hdgst:-false}, 00:27:48.523 "ddgst": ${ddgst:-false} 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 } 00:27:48.523 EOF 00:27:48.523 )") 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.523 { 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme$subsystem", 00:27:48.523 "trtype": "$TEST_TRANSPORT", 00:27:48.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "$NVMF_PORT", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.523 "hdgst": ${hdgst:-false}, 00:27:48.523 "ddgst": ${ddgst:-false} 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 } 00:27:48.523 EOF 00:27:48.523 )") 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:48.523 01:13:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme1", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme2", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme3", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme4", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme5", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme6", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme7", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme8", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:48.523 "hdgst": false, 
00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme9", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 },{ 00:27:48.523 "params": { 00:27:48.523 "name": "Nvme10", 00:27:48.523 "trtype": "tcp", 00:27:48.523 "traddr": "10.0.0.2", 00:27:48.523 "adrfam": "ipv4", 00:27:48.523 "trsvcid": "4420", 00:27:48.523 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:48.523 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:48.523 "hdgst": false, 00:27:48.523 "ddgst": false 00:27:48.523 }, 00:27:48.523 "method": "bdev_nvme_attach_controller" 00:27:48.523 }' 00:27:48.523 [2024-07-14 01:13:37.892636] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:48.523 [2024-07-14 01:13:37.892724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:48.523 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.782 [2024-07-14 01:13:37.956964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.782 [2024-07-14 01:13:38.043470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1226637 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:50.681 01:13:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:51.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1226637 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1226462 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:51.649 01:13:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.649 { 00:27:51.649 "params": { 00:27:51.649 "name": "Nvme$subsystem", 00:27:51.649 "trtype": "$TEST_TRANSPORT", 00:27:51.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.649 "adrfam": "ipv4", 00:27:51.649 "trsvcid": "$NVMF_PORT", 00:27:51.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.649 "hdgst": ${hdgst:-false}, 00:27:51.649 "ddgst": ${ddgst:-false} 00:27:51.649 }, 00:27:51.649 "method": "bdev_nvme_attach_controller" 00:27:51.649 } 00:27:51.649 EOF 00:27:51.649 )") 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.649 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.649 { 00:27:51.649 "params": { 00:27:51.649 "name": "Nvme$subsystem", 00:27:51.649 "trtype": "$TEST_TRANSPORT", 00:27:51.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.649 "adrfam": "ipv4", 00:27:51.649 "trsvcid": "$NVMF_PORT", 00:27:51.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.649 "hdgst": ${hdgst:-false}, 00:27:51.649 "ddgst": ${ddgst:-false} 00:27:51.649 }, 00:27:51.649 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.650 { 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme$subsystem", 00:27:51.650 "trtype": "$TEST_TRANSPORT", 00:27:51.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "$NVMF_PORT", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.650 "hdgst": ${hdgst:-false}, 00:27:51.650 "ddgst": ${ddgst:-false} 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.650 { 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme$subsystem", 00:27:51.650 "trtype": "$TEST_TRANSPORT", 00:27:51.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "$NVMF_PORT", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.650 "hdgst": ${hdgst:-false}, 00:27:51.650 "ddgst": ${ddgst:-false} 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.650 { 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme$subsystem", 00:27:51.650 "trtype": "$TEST_TRANSPORT", 00:27:51.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "$NVMF_PORT", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.650 "hdgst": ${hdgst:-false}, 00:27:51.650 "ddgst": ${ddgst:-false} 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.650 { 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme$subsystem", 00:27:51.650 "trtype": "$TEST_TRANSPORT", 00:27:51.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "$NVMF_PORT", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.650 "hdgst": ${hdgst:-false}, 00:27:51.650 "ddgst": ${ddgst:-false} 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.650 { 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme$subsystem", 00:27:51.650 "trtype": "$TEST_TRANSPORT", 00:27:51.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "$NVMF_PORT", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.650 "hdgst": ${hdgst:-false}, 00:27:51.650 "ddgst": ${ddgst:-false} 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.650 { 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme$subsystem", 00:27:51.650 "trtype": "$TEST_TRANSPORT", 00:27:51.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "$NVMF_PORT", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.650 "hdgst": ${hdgst:-false}, 00:27:51.650 "ddgst": ${ddgst:-false} 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.650 { 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme$subsystem", 00:27:51.650 "trtype": "$TEST_TRANSPORT", 00:27:51.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "$NVMF_PORT", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.650 "hdgst": ${hdgst:-false}, 00:27:51.650 "ddgst": ${ddgst:-false} 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.650 { 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme$subsystem", 00:27:51.650 "trtype": "$TEST_TRANSPORT", 00:27:51.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "$NVMF_PORT", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.650 "hdgst": ${hdgst:-false}, 00:27:51.650 "ddgst": ${ddgst:-false} 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 } 00:27:51.650 EOF 00:27:51.650 )") 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
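As the "Killed" message from shutdown.sh line 73 above shows, the harness never writes this JSON to disk: the generated config is passed to bdev_svc and bdevperf through process substitution, which surfaces it on a /dev/fd/N path (hence the --json /dev/fd/62 and /dev/fd/63 arguments in the trace). A condensed sketch of the tc1 bdevperf invocation follows, using the flags visible above (queue depth 64, 64 KiB I/Os, verify workload, 1 s runtime); $rootdir and num_subsystems are the harness's own variables, taken from the traced commands.

# Condensed sketch of how tc1 drives bdevperf with the generated config.
# Process substitution <(...) exposes the JSON as /dev/fd/N, so no temporary
# config file is needed; the flags mirror the invocation traced above.
num_subsystems=({1..10})
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1

Because the config arrives on a file descriptor, the same gen_nvmf_target_json output can back both the bdev_svc instance killed above and this bdevperf run without leaving files in the workspace.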
00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:51.650 01:13:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme1", 00:27:51.650 "trtype": "tcp", 00:27:51.650 "traddr": "10.0.0.2", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "4420", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:51.650 "hdgst": false, 00:27:51.650 "ddgst": false 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 },{ 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme2", 00:27:51.650 "trtype": "tcp", 00:27:51.650 "traddr": "10.0.0.2", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "4420", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:51.650 "hdgst": false, 00:27:51.650 "ddgst": false 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 },{ 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme3", 00:27:51.650 "trtype": "tcp", 00:27:51.650 "traddr": "10.0.0.2", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "4420", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:51.650 "hdgst": false, 00:27:51.650 "ddgst": false 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 },{ 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme4", 00:27:51.650 "trtype": "tcp", 00:27:51.650 "traddr": "10.0.0.2", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "4420", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:51.650 "hdgst": false, 00:27:51.650 "ddgst": false 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 },{ 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme5", 00:27:51.650 "trtype": "tcp", 00:27:51.650 "traddr": "10.0.0.2", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "4420", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:51.650 "hdgst": false, 00:27:51.650 "ddgst": false 00:27:51.650 }, 00:27:51.650 "method": "bdev_nvme_attach_controller" 00:27:51.650 },{ 00:27:51.650 "params": { 00:27:51.650 "name": "Nvme6", 00:27:51.650 "trtype": "tcp", 00:27:51.650 "traddr": "10.0.0.2", 00:27:51.650 "adrfam": "ipv4", 00:27:51.650 "trsvcid": "4420", 00:27:51.650 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:51.650 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:51.651 "hdgst": false, 00:27:51.651 "ddgst": false 00:27:51.651 }, 00:27:51.651 "method": "bdev_nvme_attach_controller" 00:27:51.651 },{ 00:27:51.651 "params": { 00:27:51.651 "name": "Nvme7", 00:27:51.651 "trtype": "tcp", 00:27:51.651 "traddr": "10.0.0.2", 00:27:51.651 "adrfam": "ipv4", 00:27:51.651 "trsvcid": "4420", 00:27:51.651 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:51.651 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:51.651 "hdgst": false, 00:27:51.651 "ddgst": false 00:27:51.651 }, 00:27:51.651 "method": "bdev_nvme_attach_controller" 00:27:51.651 },{ 00:27:51.651 "params": { 00:27:51.651 "name": "Nvme8", 00:27:51.651 "trtype": "tcp", 00:27:51.651 "traddr": "10.0.0.2", 00:27:51.651 "adrfam": "ipv4", 00:27:51.651 "trsvcid": "4420", 00:27:51.651 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:51.651 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:51.651 "hdgst": false, 
00:27:51.651 "ddgst": false 00:27:51.651 }, 00:27:51.651 "method": "bdev_nvme_attach_controller" 00:27:51.651 },{ 00:27:51.651 "params": { 00:27:51.651 "name": "Nvme9", 00:27:51.651 "trtype": "tcp", 00:27:51.651 "traddr": "10.0.0.2", 00:27:51.651 "adrfam": "ipv4", 00:27:51.651 "trsvcid": "4420", 00:27:51.651 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:51.651 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:51.651 "hdgst": false, 00:27:51.651 "ddgst": false 00:27:51.651 }, 00:27:51.651 "method": "bdev_nvme_attach_controller" 00:27:51.651 },{ 00:27:51.651 "params": { 00:27:51.651 "name": "Nvme10", 00:27:51.651 "trtype": "tcp", 00:27:51.651 "traddr": "10.0.0.2", 00:27:51.651 "adrfam": "ipv4", 00:27:51.651 "trsvcid": "4420", 00:27:51.651 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:51.651 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:51.651 "hdgst": false, 00:27:51.651 "ddgst": false 00:27:51.651 }, 00:27:51.651 "method": "bdev_nvme_attach_controller" 00:27:51.651 }' 00:27:51.651 [2024-07-14 01:13:40.904258] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:51.651 [2024-07-14 01:13:40.904353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227060 ] 00:27:51.651 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.651 [2024-07-14 01:13:40.968407] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.651 [2024-07-14 01:13:41.055059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.026 Running I/O for 1 seconds... 00:27:54.400 00:27:54.401 Latency(us) 00:27:54.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:54.401 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme1n1 : 1.13 170.16 10.63 0.00 0.00 365358.65 44273.21 284280.60 00:27:54.401 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme2n1 : 1.02 250.03 15.63 0.00 0.00 248630.42 17961.72 250104.79 00:27:54.401 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme3n1 : 1.14 224.25 14.02 0.00 0.00 273483.66 21554.06 264085.81 00:27:54.401 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme4n1 : 1.17 272.91 17.06 0.00 0.00 221124.99 17670.45 248551.35 00:27:54.401 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme5n1 : 1.19 214.33 13.40 0.00 0.00 277353.24 21845.33 279620.27 00:27:54.401 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme6n1 : 1.20 267.72 16.73 0.00 0.00 217564.05 18447.17 229910.00 00:27:54.401 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme7n1 : 1.18 216.88 13.56 0.00 0.00 265093.12 30098.01 256318.58 00:27:54.401 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 
0x0 length 0x400 00:27:54.401 Nvme8n1 : 1.20 266.69 16.67 0.00 0.00 212463.77 16311.18 270299.59 00:27:54.401 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme9n1 : 1.19 219.53 13.72 0.00 0.00 252614.00 4733.16 296708.17 00:27:54.401 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:54.401 Verification LBA range: start 0x0 length 0x400 00:27:54.401 Nvme10n1 : 1.21 264.66 16.54 0.00 0.00 207177.96 17961.72 254765.13 00:27:54.401 =================================================================================================================== 00:27:54.401 Total : 2367.17 147.95 0.00 0.00 247834.32 4733.16 296708.17 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:54.401 rmmod nvme_tcp 00:27:54.401 rmmod nvme_fabrics 00:27:54.401 rmmod nvme_keyring 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1226462 ']' 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1226462 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1226462 ']' 00:27:54.401 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1226462 00:27:54.660 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:54.660 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:54.660 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226462 00:27:54.660 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:54.660 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
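The teardown traced here (stoptarget at shutdown.sh@94, continuing with the kill below) reduces to a few steps: remove the bdevperf state and generated RPC files, unload the NVMe/TCP kernel modules, and stop the nvmf_tgt reactor. A simplified reconstruction under those assumptions follows; paths and the $nvmfpid variable are the harness's own, and the real helpers in shutdown.sh and nvmf/common.sh carry retry loops and error handling that are omitted here.

# Simplified view of the teardown traced around this point.
stoptarget() {
    rm -f ./local-job0-0-verify.state
    rm -rf "$rootdir/test/nvmf/target/bdevperf.conf" "$rootdir/test/nvmf/target/rpcs.txt"
    nvmftestfini
}

nvmftestfini() {
    sync
    modprobe -v -r nvme-tcp      # the trace shows nvme_tcp, nvme_fabrics and
    modprobe -v -r nvme-fabrics  # nvme_keyring being rmmod'ed at this step
    [[ -n $nvmfpid ]] && killprocess "$nvmfpid"   # stop the nvmf_tgt reactor
}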
00:27:54.660 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226462' 00:27:54.660 killing process with pid 1226462 00:27:54.660 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1226462 00:27:54.660 01:13:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1226462 00:27:55.232 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:55.232 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:55.232 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:55.232 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:55.232 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:55.232 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.232 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.232 01:13:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:57.145 00:27:57.145 real 0m11.568s 00:27:57.145 user 0m32.746s 00:27:57.145 sys 0m3.231s 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:57.145 ************************************ 00:27:57.145 END TEST nvmf_shutdown_tc1 00:27:57.145 ************************************ 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:57.145 ************************************ 00:27:57.145 START TEST nvmf_shutdown_tc2 00:27:57.145 ************************************ 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.145 01:13:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:57.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:57.145 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:57.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:57.145 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:57.146 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:57.146 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:57.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:27:57.407 00:27:57.407 --- 10.0.0.2 ping statistics --- 00:27:57.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.407 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:57.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:27:57.407 00:27:57.407 --- 10.0.0.1 ping statistics --- 00:27:57.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.407 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1227811 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1227811 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1227811 ']' 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:57.407 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.407 [2024-07-14 01:13:46.669296] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:57.407 [2024-07-14 01:13:46.669376] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.407 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.407 [2024-07-14 01:13:46.747598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:57.668 [2024-07-14 01:13:46.855559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.668 [2024-07-14 01:13:46.855621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.668 [2024-07-14 01:13:46.855637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.668 [2024-07-14 01:13:46.855651] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.668 [2024-07-14 01:13:46.855662] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:57.668 [2024-07-14 01:13:46.855723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:57.668 [2024-07-14 01:13:46.855763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:57.668 [2024-07-14 01:13:46.855843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:57.668 [2024-07-14 01:13:46.855845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.668 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.668 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:57.668 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:57.668 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.668 01:13:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.668 [2024-07-14 01:13:47.008829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.668 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:57.668 Malloc1 00:27:57.927 [2024-07-14 01:13:47.098697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.927 Malloc2 00:27:57.927 Malloc3 00:27:57.927 Malloc4 00:27:57.927 Malloc5 00:27:57.927 Malloc6 00:27:58.186 Malloc7 00:27:58.186 Malloc8 00:27:58.186 Malloc9 00:27:58.186 Malloc10 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1227885 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1227885 /var/tmp/bdevperf.sock 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1227885 ']' 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
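The bdevperf launch traced above is the core of shutdown_tc2: a JSON config describing one NVMe-oF controller per subsystem is generated on the fly and handed to bdevperf through process substitution (the /dev/fd/63 seen in the command line), while the target keeps listening on 10.0.0.2:4420. Reassembled from the trace, with the workspace path shortened:

# Run bdevperf against the ten subsystems; its config arrives on /dev/fd/63.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# Wait for bdevperf's own RPC socket before driving it any further.
waitforlisten "$perfpid" /var/tmp/bdevperf.sock

The -q 64 -o 65536 -w verify -t 10 options are what show up later as "workload: verify, depth: 64, IO size: 65536" in the per-device result lines.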
00:27:58.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.186 { 00:27:58.186 "params": { 00:27:58.186 "name": "Nvme$subsystem", 00:27:58.186 "trtype": "$TEST_TRANSPORT", 00:27:58.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.186 "adrfam": "ipv4", 00:27:58.186 "trsvcid": "$NVMF_PORT", 00:27:58.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.186 "hdgst": ${hdgst:-false}, 00:27:58.186 "ddgst": ${ddgst:-false} 00:27:58.186 }, 00:27:58.186 "method": "bdev_nvme_attach_controller" 00:27:58.186 } 00:27:58.186 EOF 00:27:58.186 )") 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.186 { 00:27:58.186 "params": { 00:27:58.186 "name": "Nvme$subsystem", 00:27:58.186 "trtype": "$TEST_TRANSPORT", 00:27:58.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.186 "adrfam": "ipv4", 00:27:58.186 "trsvcid": "$NVMF_PORT", 00:27:58.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.186 "hdgst": ${hdgst:-false}, 00:27:58.186 "ddgst": ${ddgst:-false} 00:27:58.186 }, 00:27:58.186 "method": "bdev_nvme_attach_controller" 00:27:58.186 } 00:27:58.186 EOF 00:27:58.186 )") 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.186 { 00:27:58.186 "params": { 00:27:58.186 "name": "Nvme$subsystem", 00:27:58.186 "trtype": "$TEST_TRANSPORT", 00:27:58.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.186 "adrfam": "ipv4", 00:27:58.186 "trsvcid": "$NVMF_PORT", 00:27:58.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.186 "hdgst": ${hdgst:-false}, 00:27:58.186 "ddgst": ${ddgst:-false} 00:27:58.186 }, 00:27:58.186 "method": "bdev_nvme_attach_controller" 00:27:58.186 } 00:27:58.186 EOF 00:27:58.186 )") 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.186 { 00:27:58.186 "params": { 00:27:58.186 "name": "Nvme$subsystem", 00:27:58.186 "trtype": "$TEST_TRANSPORT", 00:27:58.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.186 "adrfam": "ipv4", 00:27:58.186 "trsvcid": "$NVMF_PORT", 
00:27:58.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.186 "hdgst": ${hdgst:-false}, 00:27:58.186 "ddgst": ${ddgst:-false} 00:27:58.186 }, 00:27:58.186 "method": "bdev_nvme_attach_controller" 00:27:58.186 } 00:27:58.186 EOF 00:27:58.186 )") 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.186 { 00:27:58.186 "params": { 00:27:58.186 "name": "Nvme$subsystem", 00:27:58.186 "trtype": "$TEST_TRANSPORT", 00:27:58.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.186 "adrfam": "ipv4", 00:27:58.186 "trsvcid": "$NVMF_PORT", 00:27:58.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.186 "hdgst": ${hdgst:-false}, 00:27:58.186 "ddgst": ${ddgst:-false} 00:27:58.186 }, 00:27:58.186 "method": "bdev_nvme_attach_controller" 00:27:58.186 } 00:27:58.186 EOF 00:27:58.186 )") 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.186 { 00:27:58.186 "params": { 00:27:58.186 "name": "Nvme$subsystem", 00:27:58.186 "trtype": "$TEST_TRANSPORT", 00:27:58.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.186 "adrfam": "ipv4", 00:27:58.186 "trsvcid": "$NVMF_PORT", 00:27:58.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.186 "hdgst": ${hdgst:-false}, 00:27:58.186 "ddgst": ${ddgst:-false} 00:27:58.186 }, 00:27:58.186 "method": "bdev_nvme_attach_controller" 00:27:58.186 } 00:27:58.186 EOF 00:27:58.186 )") 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.186 { 00:27:58.186 "params": { 00:27:58.186 "name": "Nvme$subsystem", 00:27:58.186 "trtype": "$TEST_TRANSPORT", 00:27:58.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.186 "adrfam": "ipv4", 00:27:58.186 "trsvcid": "$NVMF_PORT", 00:27:58.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.186 "hdgst": ${hdgst:-false}, 00:27:58.186 "ddgst": ${ddgst:-false} 00:27:58.186 }, 00:27:58.186 "method": "bdev_nvme_attach_controller" 00:27:58.186 } 00:27:58.186 EOF 00:27:58.186 )") 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.186 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.186 { 00:27:58.186 "params": { 00:27:58.186 "name": "Nvme$subsystem", 00:27:58.186 "trtype": "$TEST_TRANSPORT", 00:27:58.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.186 "adrfam": "ipv4", 00:27:58.186 "trsvcid": "$NVMF_PORT", 00:27:58.186 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.187 "hdgst": ${hdgst:-false}, 00:27:58.187 "ddgst": ${ddgst:-false} 00:27:58.187 }, 00:27:58.187 "method": "bdev_nvme_attach_controller" 00:27:58.187 } 00:27:58.187 EOF 00:27:58.187 )") 00:27:58.187 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.187 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.187 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.187 { 00:27:58.187 "params": { 00:27:58.187 "name": "Nvme$subsystem", 00:27:58.187 "trtype": "$TEST_TRANSPORT", 00:27:58.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.187 "adrfam": "ipv4", 00:27:58.187 "trsvcid": "$NVMF_PORT", 00:27:58.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.187 "hdgst": ${hdgst:-false}, 00:27:58.187 "ddgst": ${ddgst:-false} 00:27:58.187 }, 00:27:58.187 "method": "bdev_nvme_attach_controller" 00:27:58.187 } 00:27:58.187 EOF 00:27:58.187 )") 00:27:58.447 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.448 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:58.448 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:58.448 { 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme$subsystem", 00:27:58.448 "trtype": "$TEST_TRANSPORT", 00:27:58.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "$NVMF_PORT", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:58.448 "hdgst": ${hdgst:-false}, 00:27:58.448 "ddgst": ${ddgst:-false} 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 } 00:27:58.448 EOF 00:27:58.448 )") 00:27:58.448 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:58.448 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:58.448 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:58.448 01:13:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme1", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme2", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme3", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme4", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme5", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme6", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme7", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme8", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:58.448 "hdgst": false, 
00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme9", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 },{ 00:27:58.448 "params": { 00:27:58.448 "name": "Nvme10", 00:27:58.448 "trtype": "tcp", 00:27:58.448 "traddr": "10.0.0.2", 00:27:58.448 "adrfam": "ipv4", 00:27:58.448 "trsvcid": "4420", 00:27:58.448 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:58.448 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:58.448 "hdgst": false, 00:27:58.448 "ddgst": false 00:27:58.448 }, 00:27:58.448 "method": "bdev_nvme_attach_controller" 00:27:58.448 }' 00:27:58.448 [2024-07-14 01:13:47.615524] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:58.448 [2024-07-14 01:13:47.615604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227885 ] 00:27:58.448 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.448 [2024-07-14 01:13:47.681213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.448 [2024-07-14 01:13:47.768334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.355 Running I/O for 10 seconds... 00:28:00.355 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:00.355 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:28:00.355 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:00.355 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.355 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:00.615 01:13:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:00.615 01:13:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:00.875 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1227885 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1227885 ']' 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1227885 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1227885 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1227885' 00:28:01.135 killing process with pid 1227885 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1227885 00:28:01.135 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1227885 00:28:01.394 Received shutdown signal, test time was about 1.063493 seconds 00:28:01.394 00:28:01.394 Latency(us) 00:28:01.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.394 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.394 Verification LBA range: start 0x0 length 0x400 00:28:01.394 Nvme1n1 : 1.03 249.20 15.57 0.00 0.00 253162.38 19903.53 273406.48 00:28:01.394 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.394 Verification LBA range: start 0x0 length 0x400 00:28:01.394 Nvme2n1 : 1.00 191.87 11.99 0.00 0.00 320130.97 23301.69 265639.25 00:28:01.394 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.394 Verification LBA range: start 0x0 length 0x400 00:28:01.394 Nvme3n1 : 1.02 251.83 15.74 0.00 0.00 237760.28 18738.44 248551.35 00:28:01.394 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.394 Verification LBA range: start 0x0 length 0x400 00:28:01.394 Nvme4n1 : 1.02 251.00 15.69 0.00 0.00 232162.80 20000.62 250104.79 00:28:01.394 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.395 Verification LBA range: start 0x0 length 0x400 00:28:01.395 Nvme5n1 : 0.99 193.63 12.10 0.00 0.00 291923.44 22233.69 253211.69 00:28:01.395 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.395 Verification LBA range: start 0x0 length 0x400 00:28:01.395 Nvme6n1 : 1.01 189.38 11.84 0.00 0.00 290790.46 24078.41 271853.04 00:28:01.395 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.395 Verification LBA range: start 0x0 length 0x400 00:28:01.395 Nvme7n1 : 1.03 248.21 15.51 0.00 0.00 215936.38 21942.42 271853.04 00:28:01.395 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.395 Verification LBA range: start 0x0 length 0x400 00:28:01.395 Nvme8n1 : 0.98 195.07 12.19 0.00 0.00 263595.30 22816.24 254765.13 00:28:01.395 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.395 Verification LBA range: start 0x0 length 0x400 00:28:01.395 Nvme9n1 : 1.01 190.82 11.93 0.00 0.00 262855.49 21554.06 256318.58 00:28:01.395 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:01.395 Verification LBA range: start 0x0 length 0x400 00:28:01.395 Nvme10n1 : 1.06 180.68 11.29 0.00 0.00 260717.23 23592.96 298261.62 00:28:01.395 
=================================================================================================================== 00:28:01.395 Total : 2141.68 133.86 0.00 0.00 259591.94 18738.44 298261.62 00:28:01.395 01:13:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1227811 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:02.772 rmmod nvme_tcp 00:28:02.772 rmmod nvme_fabrics 00:28:02.772 rmmod nvme_keyring 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1227811 ']' 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1227811 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1227811 ']' 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1227811 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1227811 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1227811' 00:28:02.772 killing process with pid 1227811 00:28:02.772 01:13:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1227811 00:28:02.772 01:13:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1227811 00:28:03.032 01:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:03.032 01:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.032 01:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.032 01:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.032 01:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.032 01:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.032 01:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.032 01:13:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:05.576 00:28:05.576 real 0m7.981s 00:28:05.576 user 0m24.470s 00:28:05.576 sys 0m1.599s 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:05.576 ************************************ 00:28:05.576 END TEST nvmf_shutdown_tc2 00:28:05.576 ************************************ 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:05.576 ************************************ 00:28:05.576 START TEST nvmf_shutdown_tc3 00:28:05.576 ************************************ 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.576 01:13:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:05.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:05.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:05.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.576 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:05.577 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.577 01:13:54 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:05.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:28:05.577 00:28:05.577 --- 10.0.0.2 ping statistics --- 00:28:05.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.577 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:05.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:28:05.577 00:28:05.577 --- 10.0.0.1 ping statistics --- 00:28:05.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.577 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1228902 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1228902 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1228902 ']' 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.577 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.577 [2024-07-14 01:13:54.705538] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:05.577 [2024-07-14 01:13:54.705626] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.577 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.577 [2024-07-14 01:13:54.770234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:05.577 [2024-07-14 01:13:54.859321] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.577 [2024-07-14 01:13:54.859371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.577 [2024-07-14 01:13:54.859385] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.577 [2024-07-14 01:13:54.859396] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.577 [2024-07-14 01:13:54.859406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
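The tc3 target that just started is reachable only because of the namespace plumbing nvmftestinit ran a few entries earlier: the first ice port (cvl_0_0, 10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace for the target, while its sibling (cvl_0_1, 10.0.0.1) stayed in the root namespace for the initiator. The commands below restate that topology as traced, with the preliminary address flushes omitted:

# Target port lives in its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Make sure the host firewall does not drop NVMe/TCP (port 4420) on the initiator side.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Both directions answered a single ping in the trace (0.206 ms and 0.104 ms).
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1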
00:28:05.577 [2024-07-14 01:13:54.859493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:05.577 [2024-07-14 01:13:54.859558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:05.577 [2024-07-14 01:13:54.859624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:05.577 [2024-07-14 01:13:54.859627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.837 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.837 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:05.837 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:05.837 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:05.837 01:13:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.837 [2024-07-14 01:13:55.019817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.837 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:05.837 Malloc1 00:28:05.837 [2024-07-14 01:13:55.109449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.837 Malloc2 00:28:05.837 Malloc3 00:28:05.837 Malloc4 00:28:06.097 Malloc5 00:28:06.097 Malloc6 00:28:06.097 Malloc7 00:28:06.097 Malloc8 00:28:06.097 Malloc9 00:28:06.356 Malloc10 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1229013 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1229013 /var/tmp/bdevperf.sock 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1229013 ']' 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:06.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
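Once this bdevperf instance (pid 1229013) is up, tc3 repeats the pattern seen in tc2 above: poll the first controller's read counter until it shows real traffic, then take the target down while I/O continues. The waitforio loop from the tc2 trace boils down to the sketch below; the helper name and the rpc.py invocation are illustrative, while the retry budget, the 100-read threshold and the 0.25 s sleep match the trace:

# Succeed once the named bdev has completed at least 100 reads; retry up to 10 times.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2 reads i
    for ((i = 10; i != 0; i--)); do
        reads=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "${reads:-0}" -ge 100 ]; then
            return 0
        fi
        sleep 0.25
    done
    return 1
}

# Example: waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1
# In the tc2 run above the counter went 3 -> 67 -> 131 before the check passed.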
00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 00:28:06.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 00:28:06.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 00:28:06.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 
00:28:06.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 00:28:06.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 00:28:06.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 00:28:06.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 00:28:06.356 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.356 { 00:28:06.356 "params": { 00:28:06.356 "name": "Nvme$subsystem", 00:28:06.356 "trtype": "$TEST_TRANSPORT", 00:28:06.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.356 "adrfam": "ipv4", 00:28:06.356 "trsvcid": "$NVMF_PORT", 00:28:06.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.356 "hdgst": ${hdgst:-false}, 00:28:06.356 "ddgst": ${ddgst:-false} 00:28:06.356 }, 00:28:06.356 "method": "bdev_nvme_attach_controller" 00:28:06.356 } 00:28:06.356 EOF 00:28:06.356 )") 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.356 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.357 { 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme$subsystem", 00:28:06.357 "trtype": "$TEST_TRANSPORT", 00:28:06.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "$NVMF_PORT", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.357 "hdgst": ${hdgst:-false}, 00:28:06.357 "ddgst": ${ddgst:-false} 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 } 00:28:06.357 EOF 00:28:06.357 )") 00:28:06.357 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:06.357 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:06.357 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:06.357 01:13:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme1", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme2", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme3", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme4", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme5", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme6", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme7", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme8", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:06.357 "hdgst": false, 
00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme9", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 },{ 00:28:06.357 "params": { 00:28:06.357 "name": "Nvme10", 00:28:06.357 "trtype": "tcp", 00:28:06.357 "traddr": "10.0.0.2", 00:28:06.357 "adrfam": "ipv4", 00:28:06.357 "trsvcid": "4420", 00:28:06.357 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:06.357 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:06.357 "hdgst": false, 00:28:06.357 "ddgst": false 00:28:06.357 }, 00:28:06.357 "method": "bdev_nvme_attach_controller" 00:28:06.357 }' 00:28:06.357 [2024-07-14 01:13:55.625790] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:06.357 [2024-07-14 01:13:55.625904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229013 ] 00:28:06.357 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.357 [2024-07-14 01:13:55.690682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.616 [2024-07-14 01:13:55.778465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.022 Running I/O for 10 seconds... 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:08.280 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:08.538 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:08.538 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:08.538 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:08.538 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:08.538 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.538 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1228902 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1228902 ']' 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1228902 00:28:08.808 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:08.809 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.809 01:13:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1228902 00:28:08.809 01:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:08.809 01:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:08.809 01:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1228902' 00:28:08.809 killing process with pid 1228902 00:28:08.809 01:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1228902 00:28:08.809 01:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1228902 00:28:08.809 
[2024-07-14 01:13:58.006711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.007996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008129] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008343] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.008367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4faf0 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.012965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the 
state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013254] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 
01:13:58.013609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.809 [2024-07-14 01:13:58.013622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.013842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61970 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.014320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014415] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.014968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.014982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:08.810 [2024-07-14 01:13:58.015361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 
[2024-07-14 01:13:58.015670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.015975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.015993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 
01:13:58.016009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016311] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.810 [2024-07-14 01:13:58.016354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.810 [2024-07-14 01:13:58.016369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ed0e0 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.016467] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9ed0e0 was disconnected and freed. reset controller. 00:28:08.810 [2024-07-14 01:13:58.016940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.016973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.016988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.810 [2024-07-14 01:13:58.017001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017014] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017459] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the 
state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017748] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.017772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50430 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019328] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019360] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019488] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 
01:13:58.019795] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019932] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019970] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.019996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same 
with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b508f0 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.020989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.021003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.021016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.021028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.021040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.811 [2024-07-14 01:13:58.021052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021152] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021397] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the 
state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.021704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b50d90 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.022992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023017] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 
01:13:58.023123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023147] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same 
with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023546] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.023644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60800 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.024923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.024949] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.024964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.024977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.024990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the 
state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025269] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.812 [2024-07-14 01:13:58.025529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.025751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a60ca0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 
01:13:58.027108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027668] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027758] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.027887] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61600 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.036968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9ee0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.037223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe228c0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.037397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 
01:13:58.037502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20b10 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.037560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49490 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.037726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe423d0 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.037905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.037985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.037999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x918610 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.038070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb4c40 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.038237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038292] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.813 [2024-07-14 01:13:58.038348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.813 [2024-07-14 01:13:58.038361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe22370 is same with the state(5) to be set 00:28:08.813 [2024-07-14 01:13:58.038406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.814 [2024-07-14 01:13:58.038426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.038441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.814 [2024-07-14 01:13:58.038455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.038469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.814 [2024-07-14 01:13:58.038483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.038497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.814 [2024-07-14 01:13:58.038511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.038524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc8030 is same with the state(5) to be set 00:28:08.814 [2024-07-14 01:13:58.038572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.814 [2024-07-14 01:13:58.038593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.038608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.814 [2024-07-14 01:13:58.038622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.038636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.814 [2024-07-14 01:13:58.038649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.038664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:08.814 [2024-07-14 01:13:58.038677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.038691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc8950 is same with the state(5) to be set 00:28:08.814 [2024-07-14 01:13:58.039937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.039962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.039994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:08.814 [2024-07-14 01:13:58.040230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:08.814 [2024-07-14 01:13:58.040544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 
[2024-07-14 01:13:58.040859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.040977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.040994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 
01:13:58.041198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 
01:13:58.041510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.814 [2024-07-14 01:13:58.041784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.814 [2024-07-14 01:13:58.041798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.041814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 
01:13:58.041830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.041847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.041861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.041891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.041907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.041924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.041938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.041954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.041968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.041984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ee280 is same with the state(5) to be set 00:28:08.815 [2024-07-14 01:13:58.042111] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9ee280 was disconnected and freed. reset controller. 
00:28:08.815 [2024-07-14 01:13:58.042593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 
[2024-07-14 01:13:58.042928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.042978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.042995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.043973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.043990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044552] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.815 [2024-07-14 01:13:58.044659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.815 [2024-07-14 01:13:58.044747] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdf9080 was disconnected and freed. reset controller. 00:28:08.815 [2024-07-14 01:13:58.044946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.815 [2024-07-14 01:13:58.045002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9ee0 (9): Bad file descriptor 00:28:08.815 [2024-07-14 01:13:58.047724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:08.815 [2024-07-14 01:13:58.047770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe228c0 (9): Bad file descriptor 00:28:08.815 [2024-07-14 01:13:58.047818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20b10 (9): Bad file descriptor 00:28:08.815 [2024-07-14 01:13:58.047854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe49490 (9): Bad file descriptor 00:28:08.815 [2024-07-14 01:13:58.047910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe423d0 (9): Bad file descriptor 00:28:08.815 [2024-07-14 01:13:58.047947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x918610 (9): Bad file descriptor 00:28:08.816 [2024-07-14 01:13:58.047978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb4c40 (9): Bad file descriptor 00:28:08.816 [2024-07-14 01:13:58.048003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22370 (9): Bad file descriptor 00:28:08.816 [2024-07-14 01:13:58.048031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc8030 (9): Bad file descriptor 00:28:08.816 [2024-07-14 01:13:58.048063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc8950 (9): Bad file descriptor 00:28:08.816 [2024-07-14 01:13:58.048839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 
00:28:08.816 [2024-07-14 01:13:58.049172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.816 [2024-07-14 01:13:58.049202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e9ee0 with addr=10.0.0.2, port=4420 00:28:08.816 [2024-07-14 01:13:58.049221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9ee0 is same with the state(5) to be set 00:28:08.816 [2024-07-14 01:13:58.049572] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:08.816 [2024-07-14 01:13:58.049649] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:08.816 [2024-07-14 01:13:58.049720] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:08.816 [2024-07-14 01:13:58.049787] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:08.816 [2024-07-14 01:13:58.049857] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:08.816 [2024-07-14 01:13:58.050200] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:08.816 [2024-07-14 01:13:58.050269] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:08.816 [2024-07-14 01:13:58.050441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.816 [2024-07-14 01:13:58.050469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe228c0 with addr=10.0.0.2, port=4420 00:28:08.816 [2024-07-14 01:13:58.050485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe228c0 is same with the state(5) to be set 00:28:08.816 [2024-07-14 01:13:58.050724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.816 [2024-07-14 01:13:58.050749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb4c40 with addr=10.0.0.2, port=4420 00:28:08.816 [2024-07-14 01:13:58.050765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb4c40 is same with the state(5) to be set 00:28:08.816 [2024-07-14 01:13:58.050784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9ee0 (9): Bad file descriptor 00:28:08.816 [2024-07-14 01:13:58.050962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe228c0 (9): Bad file descriptor 00:28:08.816 [2024-07-14 01:13:58.050991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb4c40 (9): Bad file descriptor 00:28:08.816 [2024-07-14 01:13:58.051009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.816 [2024-07-14 01:13:58.051023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.816 [2024-07-14 01:13:58.051041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.816 [2024-07-14 01:13:58.051117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:08.816 [2024-07-14 01:13:58.051139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:08.816 [2024-07-14 01:13:58.051152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:08.816 [2024-07-14 01:13:58.051168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:08.816 [2024-07-14 01:13:58.051187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:08.816 [2024-07-14 01:13:58.051201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:08.816 [2024-07-14 01:13:58.051215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:08.816 [2024-07-14 01:13:58.051270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.816 [2024-07-14 01:13:58.051288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.816 [2024-07-14 01:13:58.057966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.058972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.058989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:08.816 [2024-07-14 01:13:58.059203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 
[2024-07-14 01:13:58.059513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.816 [2024-07-14 01:13:58.059605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.816 [2024-07-14 01:13:58.059619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 
01:13:58.059827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.059972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.059988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.060004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.060019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.060035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.060050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.060066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf3c50 is same with the state(5) to be set 00:28:08.817 [2024-07-14 01:13:58.061389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061795] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.061978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.061992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.062977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.062991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.063007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.063021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.817 [2024-07-14 01:13:58.063042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.817 [2024-07-14 01:13:58.063057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:08.817-00:28:08.818 [2024-07-14 01:13:58.063073-.063392] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:53-63 nsid:1 lba:31360-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.818 [2024-07-14 01:13:58.063407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf3e30 is same with the state(5) to be set
00:28:08.818-00:28:08.819 [2024-07-14 01:13:58.064652-.066649] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.819 [2024-07-14 01:13:58.066664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf52a0 is same with the state(5) to be set
00:28:08.819-00:28:08.820 [2024-07-14 01:13:58.067908-.069939] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.820 [2024-07-14 01:13:58.069953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf6710 is same with the state(5) to be set
00:28:08.820 [2024-07-14 01:13:58.071203-.072782] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-49 nsid:1 lba:24576-30848 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:08.820 [2024-07-14 01:13:58.072798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.820 [2024-07-14 01:13:58.072813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.072829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.072844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.072860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.072886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.072913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.072928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.072944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.072959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.072975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.072989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.073006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.073019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.073035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.073050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.073066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.073080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.073100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.073116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.073132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 
01:13:58.073146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.073163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.073177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.073194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.080945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.081024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.081040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.081056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf7b80 is same with the state(5) to be set 00:28:08.820 [2024-07-14 01:13:58.082412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.820 [2024-07-14 01:13:58.082746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.820 [2024-07-14 01:13:58.082762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.082776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.082792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.082806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.082821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.082836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.082852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.082876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.082914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.082935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.082952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.082967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.082984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.082998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.083974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.083991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.084468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.084482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfa330 is same with the state(5) to be set 00:28:08.821 [2024-07-14 01:13:58.086110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086488] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.821 [2024-07-14 01:13:58.086519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.821 [2024-07-14 01:13:58.086535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.086983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.086997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:08.822 [2024-07-14 01:13:58.087756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.087982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.087996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.088012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.088026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.088042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.088056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 
01:13:58.088072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.088086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.088103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.822 [2024-07-14 01:13:58.088116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:08.822 [2024-07-14 01:13:58.088131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdfb830 is same with the state(5) to be set 00:28:08.822 [2024-07-14 01:13:58.089727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:08.822 [2024-07-14 01:13:58.089761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:08.822 [2024-07-14 01:13:58.089779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:08.822 [2024-07-14 01:13:58.089913] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.822 [2024-07-14 01:13:58.089944] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.822 [2024-07-14 01:13:58.089967] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.822 [2024-07-14 01:13:58.089987] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.822 [2024-07-14 01:13:58.090006] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:08.822 [2024-07-14 01:13:58.090111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:08.822 [2024-07-14 01:13:58.090141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:08.822 [2024-07-14 01:13:58.090160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:28:08.822 task offset: 16384 on job bdev=Nvme1n1 fails
00:28:08.822
00:28:08.822 Latency(us)
00:28:08.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:08.822 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme1n1 ended in about 0.91 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme1n1 : 0.91 140.08 8.75 70.04 0.00 301256.25 60584.39 242337.56
00:28:08.822 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme2n1 ended in about 0.92 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme2n1 : 0.92 208.68 13.04 69.56 0.00 222879.29 9611.95 265639.25
00:28:08.822 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme3n1 ended in about 0.94 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme3n1 : 0.94 136.88 8.56 68.44 0.00 296121.08 22233.69 256318.58
00:28:08.822 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme4n1 ended in about 0.94 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme4n1 : 0.94 204.60 12.79 68.20 0.00 218310.92 19029.71 259425.47
00:28:08.822 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme5n1 ended in about 0.94 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme5n1 : 0.94 135.93 8.50 67.97 0.00 286133.41 19515.16 242337.56
00:28:08.822 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme6n1 ended in about 0.94 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme6n1 : 0.94 135.46 8.47 67.73 0.00 281321.37 23010.42 281173.71
00:28:08.822 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme7n1 ended in about 0.96 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme7n1 : 0.96 200.83 12.55 66.94 0.00 209271.85 18738.44 233016.89
00:28:08.822 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme8n1 ended in about 0.92 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme8n1 : 0.92 208.36 13.02 69.45 0.00 196166.16 10048.85 271853.04
00:28:08.822 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.822 Job: Nvme9n1 ended in about 0.96 seconds with error
00:28:08.822 Verification LBA range: start 0x0 length 0x400
00:28:08.822 Nvme9n1 : 0.96 133.41 8.34 66.71 0.00 268415.43 25826.04 262532.36
00:28:08.823 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:08.823 Job: Nvme10n1 ended in about 0.96 seconds with error
00:28:08.823 Verification LBA range: start 0x0 length 0x400
00:28:08.823 Nvme10n1 : 0.96 132.91 8.31 66.45 0.00 263725.76 24563.86 285834.05
=================================================================================================================== 00:28:08.823 Total : 1637.14 102.32 681.49 0.00 249336.26 9611.95 285834.05 00:28:08.823 [2024-07-14 01:13:58.117532] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:08.823 [2024-07-14 01:13:58.117628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:08.823 [2024-07-14 01:13:58.117668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:08.823 [2024-07-14 01:13:58.118092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.118130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe22370 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.118152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe22370 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.118304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.118329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe20b10 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.118345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20b10 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.118511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.118546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe49490 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.118562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe49490 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.120756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.120787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe423d0 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.120803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe423d0 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.120958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.120984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x918610 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.121001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x918610 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.121173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.121197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc8950 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.121213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc8950 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.121394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.121419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc8030 with addr=10.0.0.2, port=4420 
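Editor's note: stepping back from the connection errors for a moment, the per-device rows and the Total line above are bdevperf's end-of-run summary for this shutdown case (verify workload, queue depth 64, 64 KiB I/Os, as each Job line states); every device ends with Fail/s counts because its connections to the target went away mid-run. The multicontroller test later in this log drives the same bdevperf binary over its RPC socket rather than a config file; a minimal sketch of that pattern, with binary paths and flags taken from this log and the attached bdev name being an illustrative assumption:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf idle (-z) so it waits for configuration on its own RPC socket.
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 &
  # Give it something to test: attach the target namespace as a local NVMe bdev.
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Kick off the configured run and wait for a summary table like the one above.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests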
00:28:08.823 [2024-07-14 01:13:58.121435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc8030 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.121587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.121613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e9ee0 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.121629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9ee0 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.121663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe22370 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.121688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe20b10 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.121707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe49490 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.121762] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.823 [2024-07-14 01:13:58.121791] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.823 [2024-07-14 01:13:58.121816] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.823 [2024-07-14 01:13:58.121843] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.823 [2024-07-14 01:13:58.121864] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:08.823 [2024-07-14 01:13:58.121960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:08.823 [2024-07-14 01:13:58.121985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:08.823 [2024-07-14 01:13:58.122053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe423d0 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.122078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x918610 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.122096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc8950 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.122115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc8030 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.122133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9ee0 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.122150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.122164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.122182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:28:08.823 [2024-07-14 01:13:58.122201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.122216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.122229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:08.823 [2024-07-14 01:13:58.122245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.122259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.122272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:08.823 [2024-07-14 01:13:58.122376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.122399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.122412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.122573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.122599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb4c40 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.122615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb4c40 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.122757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.823 [2024-07-14 01:13:58.122782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe228c0 with addr=10.0.0.2, port=4420 00:28:08.823 [2024-07-14 01:13:58.122798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe228c0 is same with the state(5) to be set 00:28:08.823 [2024-07-14 01:13:58.122812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.122825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.122838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:08.823 [2024-07-14 01:13:58.122861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.122890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.122910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:08.823 [2024-07-14 01:13:58.122929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.122943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.122956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:28:08.823 [2024-07-14 01:13:58.122971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.122985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.122998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:08.823 [2024-07-14 01:13:58.123014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.123028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.123041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:08.823 [2024-07-14 01:13:58.123080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.123100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.123112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.123124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.123135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.123152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb4c40 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.123171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe228c0 (9): Bad file descriptor 00:28:08.823 [2024-07-14 01:13:58.123211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.123230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.123244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:08.823 [2024-07-14 01:13:58.123261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:08.823 [2024-07-14 01:13:58.123275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:08.823 [2024-07-14 01:13:58.123288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:08.823 [2024-07-14 01:13:58.123328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:08.823 [2024-07-14 01:13:58.123346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
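Editor's note: each "connect() failed, errno = 111" above is ECONNREFUSED. By this point shutdown_tc3 has already stopped the target, so every reconnect that bdev_nvme attempts during the controller resets is refused, reinitialization fails, and the controllers are left in the failed state, which is the expected outcome of the test. A trivial way to confirm from the same shell whether anything is still accepting on the target port (pure-bash sketch, address and port taken from the log):

  # errno 111 (ECONNREFUSED): nothing is listening on 10.0.0.2:4420 any more.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "target is accepting on 4420"
  else
      echo "connection refused - target is down (expected at this point in the test)"
  fi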
00:28:09.389 01:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:09.389 01:13:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1229013 00:28:10.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1229013) - No such process 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.325 rmmod nvme_tcp 00:28:10.325 rmmod nvme_fabrics 00:28:10.325 rmmod nvme_keyring 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.325 01:13:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.860 01:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:12.861 00:28:12.861 real 0m7.193s 00:28:12.861 user 0m16.678s 00:28:12.861 sys 0m1.549s 00:28:12.861 
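Editor's note: the block above is the tail of shutdown_tc3's cleanup. The recorded PID is already gone (hence "No such process" followed by "true"), stoptarget removes the bdevperf config and RPC scratch files, and nvmftestfini unloads the NVMe host modules and tears down the target namespace before the per-test timing summary. A rough stand-alone equivalent of that teardown (sketch only; _remove_spdk_ns is not shown in the log, so the namespace deletion below is an assumption):

  # Mirror of the nvmftestfini steps visible above, as plain commands.
  rm -f ./local-job0-0-verify.state                 # bdevperf verify-state file
  modprobe -v -r nvme-tcp                           # rmmod of nvme_tcp/nvme_fabrics/nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1                          # clear the initiator-side interface
  # Assumption: _remove_spdk_ns deletes the target-side namespace created during setup.
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true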
01:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:12.861 01:14:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.861 ************************************ 00:28:12.861 END TEST nvmf_shutdown_tc3 00:28:12.861 ************************************ 00:28:12.861 01:14:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:12.861 01:14:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:12.861 00:28:12.861 real 0m26.962s 00:28:12.861 user 1m13.984s 00:28:12.861 sys 0m6.524s 00:28:12.861 01:14:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:12.861 01:14:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.861 ************************************ 00:28:12.861 END TEST nvmf_shutdown 00:28:12.861 ************************************ 00:28:12.861 01:14:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:12.861 01:14:01 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:12.861 01:14:01 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:12.861 01:14:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.861 01:14:01 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:12.861 01:14:01 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:12.861 01:14:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.861 01:14:01 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:12.861 01:14:01 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:12.861 01:14:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:12.861 01:14:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.861 01:14:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.861 ************************************ 00:28:12.861 START TEST nvmf_multicontroller 00:28:12.861 ************************************ 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:12.861 * Looking for test storage... 
00:28:12.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:12.861 01:14:01 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.861 01:14:01 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.760 01:14:03 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:14.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:14.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:14.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:14.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.760 01:14:03 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:14.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:28:14.760 00:28:14.760 --- 10.0.0.2 ping statistics --- 00:28:14.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.760 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:28:14.760 00:28:14.760 --- 10.0.0.1 ping statistics --- 00:28:14.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.760 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1231472 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1231472 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1231472 ']' 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:14.760 01:14:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.760 [2024-07-14 01:14:03.915379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:14.760 [2024-07-14 01:14:03.915462] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.760 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.760 [2024-07-14 01:14:03.988078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:14.760 [2024-07-14 01:14:04.078922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.760 [2024-07-14 01:14:04.078988] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.760 [2024-07-14 01:14:04.079012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.760 [2024-07-14 01:14:04.079026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.760 [2024-07-14 01:14:04.079038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
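Editor's note: at this point the multicontroller test has the usual split in place: the two ice ports show up as cvl_0_0 and cvl_0_1, the target side (cvl_0_0, 10.0.0.2) lives in the cvl_0_0_ns_spdk namespace, the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, and nvmf_tgt has just been started inside the namespace on cores 1-3 (-m 0xE) with all trace groups enabled. The RPC configuration performed in the next log lines, plus the bdevperf-side attach checks further down, can be reproduced roughly as below (command names and arguments are copied from the rpc_cmd lines in this log; calling scripts/rpc.py directly and the socket-wait loop are assumptions about how rpc_cmd and waitforlisten are implemented):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS="ip netns exec cvl_0_0_ns_spdk"
  RPC="$SPDK/scripts/rpc.py"

  # Start the target inside the namespace, then wait for its RPC socket (simplified waitforlisten).
  $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

  # Target-side configuration, as in the rpc_cmd lines that follow in the log:
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Initiator/bdevperf side (socket /var/tmp/bdevperf.sock): the first attach on 4420 creates
  # bdev NVMe0n1; re-using the name NVMe0 for a conflicting attach is rejected with error -114,
  # while a clean second path on 4421 is accepted - exactly what the NOT checks below verify.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1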
00:28:14.760 [2024-07-14 01:14:04.079139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.760 [2024-07-14 01:14:04.079237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.761 [2024-07-14 01:14:04.079239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 [2024-07-14 01:14:04.219277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 Malloc0 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 [2024-07-14 01:14:04.280512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 
01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 [2024-07-14 01:14:04.288402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 Malloc1 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1231493 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1231493 /var/tmp/bdevperf.sock 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1231493 ']' 00:28:15.018 01:14:04 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:15.018 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:15.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:15.019 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:15.019 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.275 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:15.275 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:15.275 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:15.275 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.275 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.533 NVMe0n1 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.533 1 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.533 request: 00:28:15.533 { 00:28:15.533 "name": "NVMe0", 00:28:15.533 "trtype": "tcp", 00:28:15.533 "traddr": "10.0.0.2", 00:28:15.533 "adrfam": "ipv4", 00:28:15.533 "trsvcid": "4420", 00:28:15.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.533 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:15.533 "hostaddr": "10.0.0.2", 00:28:15.533 "hostsvcid": "60000", 00:28:15.533 "prchk_reftag": false, 00:28:15.533 "prchk_guard": false, 00:28:15.533 "hdgst": false, 00:28:15.533 "ddgst": false, 00:28:15.533 "method": "bdev_nvme_attach_controller", 00:28:15.533 "req_id": 1 00:28:15.533 } 00:28:15.533 Got JSON-RPC error response 00:28:15.533 response: 00:28:15.533 { 00:28:15.533 "code": -114, 00:28:15.533 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:15.533 } 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.533 request: 00:28:15.533 { 00:28:15.533 "name": "NVMe0", 00:28:15.533 "trtype": "tcp", 00:28:15.533 "traddr": "10.0.0.2", 00:28:15.533 "adrfam": "ipv4", 00:28:15.533 "trsvcid": "4420", 00:28:15.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:15.533 "hostaddr": "10.0.0.2", 00:28:15.533 "hostsvcid": "60000", 00:28:15.533 "prchk_reftag": false, 00:28:15.533 "prchk_guard": false, 
00:28:15.533 "hdgst": false, 00:28:15.533 "ddgst": false, 00:28:15.533 "method": "bdev_nvme_attach_controller", 00:28:15.533 "req_id": 1 00:28:15.533 } 00:28:15.533 Got JSON-RPC error response 00:28:15.533 response: 00:28:15.533 { 00:28:15.533 "code": -114, 00:28:15.533 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:15.533 } 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.533 request: 00:28:15.533 { 00:28:15.533 "name": "NVMe0", 00:28:15.533 "trtype": "tcp", 00:28:15.533 "traddr": "10.0.0.2", 00:28:15.533 "adrfam": "ipv4", 00:28:15.533 "trsvcid": "4420", 00:28:15.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.533 "hostaddr": "10.0.0.2", 00:28:15.533 "hostsvcid": "60000", 00:28:15.533 "prchk_reftag": false, 00:28:15.533 "prchk_guard": false, 00:28:15.533 "hdgst": false, 00:28:15.533 "ddgst": false, 00:28:15.533 "multipath": "disable", 00:28:15.533 "method": "bdev_nvme_attach_controller", 00:28:15.533 "req_id": 1 00:28:15.533 } 00:28:15.533 Got JSON-RPC error response 00:28:15.533 response: 00:28:15.533 { 00:28:15.533 "code": -114, 00:28:15.533 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:15.533 } 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:15.533 01:14:04 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.533 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.533 request: 00:28:15.533 { 00:28:15.533 "name": "NVMe0", 00:28:15.533 "trtype": "tcp", 00:28:15.533 "traddr": "10.0.0.2", 00:28:15.533 "adrfam": "ipv4", 00:28:15.533 "trsvcid": "4420", 00:28:15.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:15.533 "hostaddr": "10.0.0.2", 00:28:15.533 "hostsvcid": "60000", 00:28:15.533 "prchk_reftag": false, 00:28:15.533 "prchk_guard": false, 00:28:15.533 "hdgst": false, 00:28:15.533 "ddgst": false, 00:28:15.533 "multipath": "failover", 00:28:15.533 "method": "bdev_nvme_attach_controller", 00:28:15.533 "req_id": 1 00:28:15.533 } 00:28:15.534 Got JSON-RPC error response 00:28:15.534 response: 00:28:15.534 { 00:28:15.534 "code": -114, 00:28:15.534 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:15.534 } 00:28:15.534 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:15.534 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:15.534 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:15.534 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:15.534 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:15.534 01:14:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:15.534 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.534 01:14:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.791 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.791 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:15.791 01:14:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:17.171 0 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1231493 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1231493 ']' 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1231493 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1231493 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1231493' 00:28:17.171 killing process with pid 1231493 00:28:17.171 01:14:06 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1231493 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1231493 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:17.171 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:17.171 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:17.171 [2024-07-14 01:14:04.393691] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:17.171 [2024-07-14 01:14:04.393774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231493 ] 00:28:17.171 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.171 [2024-07-14 01:14:04.455314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.171 [2024-07-14 01:14:04.541596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.171 [2024-07-14 01:14:05.080611] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 35aacfed-1aa4-40b7-85c7-300704d5e41c already exists 00:28:17.171 [2024-07-14 01:14:05.080648] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:35aacfed-1aa4-40b7-85c7-300704d5e41c alias for bdev NVMe1n1 00:28:17.171 [2024-07-14 01:14:05.080678] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:17.171 Running I/O for 1 seconds... 
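For reference, the attach/detach sequence exercised above can be replayed by hand with scripts/rpc.py against the bdevperf RPC socket. This is a minimal sketch using the same socket path, addresses and NQNs as this run; it assumes bdevperf was started beforehand in wait-for-tests mode with its RPC socket at /var/tmp/bdevperf.sock, which happens outside this excerpt:

  SOCK=/var/tmp/bdevperf.sock
  RPC=./scripts/rpc.py

  # First path to cnode1 on port 4420 creates the NVMe0 controller (bdev NVMe0n1)
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # Re-attaching NVMe0 over the same portal is rejected with -114, as above,
  # whether plain, -x disable, or -x failover is used; a second portal is accepted
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  $RPC -s $SOCK bdev_nvme_get_controllers                         # inspect the resulting controllers/paths
  ./examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests     # run the configured workload
  $RPC -s $SOCK bdev_nvme_detach_controller NVMe0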
00:28:17.171 00:28:17.171 Latency(us) 00:28:17.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.171 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:17.171 NVMe0n1 : 1.01 16610.80 64.89 0.00 0.00 7672.93 3786.52 11456.66 00:28:17.171 =================================================================================================================== 00:28:17.171 Total : 16610.80 64.89 0.00 0.00 7672.93 3786.52 11456.66 00:28:17.171 Received shutdown signal, test time was about 1.000000 seconds 00:28:17.171 00:28:17.171 Latency(us) 00:28:17.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.171 =================================================================================================================== 00:28:17.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.172 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.172 rmmod nvme_tcp 00:28:17.172 rmmod nvme_fabrics 00:28:17.172 rmmod nvme_keyring 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1231472 ']' 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1231472 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1231472 ']' 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1231472 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1231472 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1231472' 00:28:17.172 killing process with pid 1231472 00:28:17.172 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1231472 00:28:17.172 01:14:06 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1231472 00:28:17.431 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.431 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.431 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.431 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.431 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.431 01:14:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.431 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.431 01:14:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.972 01:14:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:19.972 00:28:19.972 real 0m7.115s 00:28:19.972 user 0m11.019s 00:28:19.972 sys 0m2.232s 00:28:19.972 01:14:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:19.972 01:14:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:19.972 ************************************ 00:28:19.972 END TEST nvmf_multicontroller 00:28:19.972 ************************************ 00:28:19.972 01:14:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:19.972 01:14:08 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:19.972 01:14:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:19.972 01:14:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.972 01:14:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.972 ************************************ 00:28:19.972 START TEST nvmf_aer 00:28:19.972 ************************************ 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:19.972 * Looking for test storage... 
00:28:19.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.972 01:14:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.972 01:14:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:21.875 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:21.875 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.875 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:21.876 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:21.876 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.876 
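Condensed, the target/initiator split that the nvmf_tcp_init trace below performs is: move the first ice port into a private network namespace for the target and leave the second port in the root namespace for the initiator. A rough hand-run equivalent, with the interface names and addresses taken from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC goes into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic reach port 4420
  ping -c 1 10.0.0.2                                              # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> initiator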
01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.876 01:14:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:28:21.876 00:28:21.876 --- 10.0.0.2 ping statistics --- 00:28:21.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.876 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:28:21.876 00:28:21.876 --- 10.0.0.1 ping statistics --- 00:28:21.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.876 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1233700 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1233700 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1233700 ']' 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:21.876 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:21.876 [2024-07-14 01:14:11.092182] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:21.876 [2024-07-14 01:14:11.092271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.876 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.876 [2024-07-14 01:14:11.158030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.876 [2024-07-14 01:14:11.243586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.876 [2024-07-14 01:14:11.243654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:21.876 [2024-07-14 01:14:11.243667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.876 [2024-07-14 01:14:11.243678] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.876 [2024-07-14 01:14:11.243687] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.876 [2024-07-14 01:14:11.243843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.876 [2024-07-14 01:14:11.243932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.876 [2024-07-14 01:14:11.243934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.876 [2024-07-14 01:14:11.243905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.134 [2024-07-14 01:14:11.393761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.134 Malloc0 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.134 [2024-07-14 01:14:11.446542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.134 [ 00:28:22.134 { 00:28:22.134 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:22.134 "subtype": "Discovery", 00:28:22.134 "listen_addresses": [], 00:28:22.134 "allow_any_host": true, 00:28:22.134 "hosts": [] 00:28:22.134 }, 00:28:22.134 { 00:28:22.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.134 "subtype": "NVMe", 00:28:22.134 "listen_addresses": [ 00:28:22.134 { 00:28:22.134 "trtype": "TCP", 00:28:22.134 "adrfam": "IPv4", 00:28:22.134 "traddr": "10.0.0.2", 00:28:22.134 "trsvcid": "4420" 00:28:22.134 } 00:28:22.134 ], 00:28:22.134 "allow_any_host": true, 00:28:22.134 "hosts": [], 00:28:22.134 "serial_number": "SPDK00000000000001", 00:28:22.134 "model_number": "SPDK bdev Controller", 00:28:22.134 "max_namespaces": 2, 00:28:22.134 "min_cntlid": 1, 00:28:22.134 "max_cntlid": 65519, 00:28:22.134 "namespaces": [ 00:28:22.134 { 00:28:22.134 "nsid": 1, 00:28:22.134 "bdev_name": "Malloc0", 00:28:22.134 "name": "Malloc0", 00:28:22.134 "nguid": "5E786CEF222945C1BB32D16BDB4CDCFC", 00:28:22.134 "uuid": "5e786cef-2229-45c1-bb32-d16bdb4cdcfc" 00:28:22.134 } 00:28:22.134 ] 00:28:22.134 } 00:28:22.134 ] 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1233732 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:22.134 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:22.134 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.408 Malloc1 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.408 [ 00:28:22.408 { 00:28:22.408 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:22.408 "subtype": "Discovery", 00:28:22.408 "listen_addresses": [], 00:28:22.408 "allow_any_host": true, 00:28:22.408 "hosts": [] 00:28:22.408 }, 00:28:22.408 { 00:28:22.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:22.408 "subtype": "NVMe", 00:28:22.408 "listen_addresses": [ 00:28:22.408 { 00:28:22.408 "trtype": "TCP", 00:28:22.408 "adrfam": "IPv4", 00:28:22.408 "traddr": "10.0.0.2", 00:28:22.408 "trsvcid": "4420" 00:28:22.408 } 00:28:22.408 ], 00:28:22.408 "allow_any_host": true, 00:28:22.408 "hosts": [], 00:28:22.408 "serial_number": "SPDK00000000000001", 00:28:22.408 "model_number": "SPDK bdev Controller", 00:28:22.408 "max_namespaces": 2, 00:28:22.408 "min_cntlid": 1, 00:28:22.408 "max_cntlid": 65519, 00:28:22.408 "namespaces": [ 00:28:22.408 { 00:28:22.408 "nsid": 1, 00:28:22.408 "bdev_name": "Malloc0", 00:28:22.408 "name": "Malloc0", 00:28:22.408 "nguid": "5E786CEF222945C1BB32D16BDB4CDCFC", 00:28:22.408 "uuid": "5e786cef-2229-45c1-bb32-d16bdb4cdcfc" 00:28:22.408 }, 00:28:22.408 { 00:28:22.408 "nsid": 2, 00:28:22.408 "bdev_name": "Malloc1", 00:28:22.408 "name": "Malloc1", 00:28:22.408 "nguid": "782CC5DDDB7B451F9F6F486B1A1382C7", 00:28:22.408 "uuid": "782cc5dd-db7b-451f-9f6f-486b1a1382c7" 00:28:22.408 } 00:28:22.408 ] 00:28:22.408 } 00:28:22.408 ] 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1233732 00:28:22.408 Asynchronous Event Request test 00:28:22.408 Attaching to 10.0.0.2 00:28:22.408 Attached to 10.0.0.2 00:28:22.408 Registering asynchronous event callbacks... 00:28:22.408 Starting namespace attribute notice tests for all controllers... 00:28:22.408 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:22.408 aer_cb - Changed Namespace 00:28:22.408 Cleaning up... 
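The "aer_cb - Changed Namespace" event above is raised when a second namespace is hot-added while the aer test application is attached. Using the same RPCs and arguments as this run, the trigger can be reproduced roughly as follows (paths are relative to an SPDK build tree, and the target is assumed to use the default RPC socket /var/tmp/spdk.sock):

  RPC=./scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Attach the AER test app; it registers the AEN callback and waits for events
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &

  # Hot-adding a second namespace triggers the namespace attribute notice seen above
  $RPC bdev_malloc_create 64 4096 --name Malloc1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2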
00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.408 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:22.686 rmmod nvme_tcp 00:28:22.686 rmmod nvme_fabrics 00:28:22.686 rmmod nvme_keyring 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:22.686 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1233700 ']' 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1233700 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1233700 ']' 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1233700 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1233700 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1233700' 00:28:22.687 killing process with pid 1233700 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1233700 00:28:22.687 01:14:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1233700 00:28:22.948 01:14:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:22.948 01:14:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:22.948 01:14:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:28:22.948 01:14:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.948 01:14:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:22.948 01:14:12 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.948 01:14:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.948 01:14:12 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.851 01:14:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:24.851 00:28:24.851 real 0m5.264s 00:28:24.851 user 0m4.198s 00:28:24.851 sys 0m1.834s 00:28:24.851 01:14:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:24.851 01:14:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:24.851 ************************************ 00:28:24.851 END TEST nvmf_aer 00:28:24.851 ************************************ 00:28:24.851 01:14:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:24.851 01:14:14 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:24.851 01:14:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:24.851 01:14:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.851 01:14:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.851 ************************************ 00:28:24.851 START TEST nvmf_async_init 00:28:24.851 ************************************ 00:28:24.851 01:14:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:25.110 * Looking for test storage... 
00:28:25.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.110 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.110 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:25.110 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.110 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d977c56f8295467e8c1b40d7e59ee723 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:25.111 01:14:14 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.111 01:14:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.019 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:27.020 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:27.020 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:27.020 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:27.020 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:27.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:28:27.020 00:28:27.020 --- 10.0.0.2 ping statistics --- 00:28:27.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.020 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:27.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:28:27.020 00:28:27.020 --- 10.0.0.1 ping statistics --- 00:28:27.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.020 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1235780 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1235780 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1235780 ']' 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.020 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.280 [2024-07-14 01:14:16.476830] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
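Before the target application comes up, nvmftestinit/nvmf_tcp_init has already carved the two E810 ports into a small back-to-back TCP topology: one port stays in the root namespace as the initiator, the other is moved into a private namespace as the target, and a single ping in each direction verifies the path. A condensed sketch of that sequence, gathered from the trace above purely for readability (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are exactly what the log prints), looks like this:

  # target side lives in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # addresses: initiator 10.0.0.1, target 10.0.0.2, same /24
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP (port 4420) in through the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator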
00:28:27.280 [2024-07-14 01:14:16.476920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.280 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.280 [2024-07-14 01:14:16.539785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.280 [2024-07-14 01:14:16.626860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.280 [2024-07-14 01:14:16.626921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.280 [2024-07-14 01:14:16.626949] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.280 [2024-07-14 01:14:16.626961] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.280 [2024-07-14 01:14:16.626971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.280 [2024-07-14 01:14:16.626998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 [2024-07-14 01:14:16.770018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 null0 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 01:14:16 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d977c56f8295467e8c1b40d7e59ee723 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 [2024-07-14 01:14:16.810255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.541 01:14:16 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.801 nvme0n1 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.801 [ 00:28:27.801 { 00:28:27.801 "name": "nvme0n1", 00:28:27.801 "aliases": [ 00:28:27.801 "d977c56f-8295-467e-8c1b-40d7e59ee723" 00:28:27.801 ], 00:28:27.801 "product_name": "NVMe disk", 00:28:27.801 "block_size": 512, 00:28:27.801 "num_blocks": 2097152, 00:28:27.801 "uuid": "d977c56f-8295-467e-8c1b-40d7e59ee723", 00:28:27.801 "assigned_rate_limits": { 00:28:27.801 "rw_ios_per_sec": 0, 00:28:27.801 "rw_mbytes_per_sec": 0, 00:28:27.801 "r_mbytes_per_sec": 0, 00:28:27.801 "w_mbytes_per_sec": 0 00:28:27.801 }, 00:28:27.801 "claimed": false, 00:28:27.801 "zoned": false, 00:28:27.801 "supported_io_types": { 00:28:27.801 "read": true, 00:28:27.801 "write": true, 00:28:27.801 "unmap": false, 00:28:27.801 "flush": true, 00:28:27.801 "reset": true, 00:28:27.801 "nvme_admin": true, 00:28:27.801 "nvme_io": true, 00:28:27.801 "nvme_io_md": false, 00:28:27.801 "write_zeroes": true, 00:28:27.801 "zcopy": false, 00:28:27.801 "get_zone_info": false, 00:28:27.801 "zone_management": false, 00:28:27.801 "zone_append": false, 00:28:27.801 "compare": true, 00:28:27.801 "compare_and_write": true, 00:28:27.801 "abort": true, 00:28:27.801 "seek_hole": false, 00:28:27.801 "seek_data": false, 00:28:27.801 "copy": true, 00:28:27.801 "nvme_iov_md": false 00:28:27.801 }, 00:28:27.801 "memory_domains": [ 00:28:27.801 { 00:28:27.801 "dma_device_id": "system", 00:28:27.801 "dma_device_type": 1 00:28:27.801 } 00:28:27.801 ], 00:28:27.801 "driver_specific": { 00:28:27.801 "nvme": [ 00:28:27.801 { 00:28:27.801 "trid": { 00:28:27.801 "trtype": "TCP", 00:28:27.801 "adrfam": "IPv4", 00:28:27.801 "traddr": "10.0.0.2", 
00:28:27.801 "trsvcid": "4420", 00:28:27.801 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:27.801 }, 00:28:27.801 "ctrlr_data": { 00:28:27.801 "cntlid": 1, 00:28:27.801 "vendor_id": "0x8086", 00:28:27.801 "model_number": "SPDK bdev Controller", 00:28:27.801 "serial_number": "00000000000000000000", 00:28:27.801 "firmware_revision": "24.09", 00:28:27.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.801 "oacs": { 00:28:27.801 "security": 0, 00:28:27.801 "format": 0, 00:28:27.801 "firmware": 0, 00:28:27.801 "ns_manage": 0 00:28:27.801 }, 00:28:27.801 "multi_ctrlr": true, 00:28:27.801 "ana_reporting": false 00:28:27.801 }, 00:28:27.801 "vs": { 00:28:27.801 "nvme_version": "1.3" 00:28:27.801 }, 00:28:27.801 "ns_data": { 00:28:27.801 "id": 1, 00:28:27.801 "can_share": true 00:28:27.801 } 00:28:27.801 } 00:28:27.801 ], 00:28:27.801 "mp_policy": "active_passive" 00:28:27.801 } 00:28:27.801 } 00:28:27.801 ] 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.801 [2024-07-14 01:14:17.063395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:27.801 [2024-07-14 01:14:17.063501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6cc40 (9): Bad file descriptor 00:28:27.801 [2024-07-14 01:14:17.196006] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.801 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:27.801 [ 00:28:27.801 { 00:28:27.801 "name": "nvme0n1", 00:28:27.801 "aliases": [ 00:28:27.801 "d977c56f-8295-467e-8c1b-40d7e59ee723" 00:28:27.801 ], 00:28:27.801 "product_name": "NVMe disk", 00:28:27.801 "block_size": 512, 00:28:27.801 "num_blocks": 2097152, 00:28:27.801 "uuid": "d977c56f-8295-467e-8c1b-40d7e59ee723", 00:28:27.801 "assigned_rate_limits": { 00:28:27.801 "rw_ios_per_sec": 0, 00:28:27.801 "rw_mbytes_per_sec": 0, 00:28:27.801 "r_mbytes_per_sec": 0, 00:28:27.801 "w_mbytes_per_sec": 0 00:28:27.801 }, 00:28:27.801 "claimed": false, 00:28:27.801 "zoned": false, 00:28:27.801 "supported_io_types": { 00:28:27.801 "read": true, 00:28:27.801 "write": true, 00:28:27.801 "unmap": false, 00:28:27.801 "flush": true, 00:28:27.801 "reset": true, 00:28:27.802 "nvme_admin": true, 00:28:27.802 "nvme_io": true, 00:28:27.802 "nvme_io_md": false, 00:28:27.802 "write_zeroes": true, 00:28:27.802 "zcopy": false, 00:28:27.802 "get_zone_info": false, 00:28:27.802 "zone_management": false, 00:28:27.802 "zone_append": false, 00:28:27.802 "compare": true, 00:28:27.802 "compare_and_write": true, 00:28:27.802 "abort": true, 00:28:27.802 "seek_hole": false, 00:28:27.802 "seek_data": false, 00:28:27.802 "copy": true, 00:28:27.802 "nvme_iov_md": false 00:28:27.802 }, 00:28:27.802 "memory_domains": [ 00:28:27.802 { 00:28:27.802 "dma_device_id": "system", 00:28:27.802 "dma_device_type": 1 
00:28:27.802 } 00:28:27.802 ], 00:28:27.802 "driver_specific": { 00:28:27.802 "nvme": [ 00:28:27.802 { 00:28:27.802 "trid": { 00:28:27.802 "trtype": "TCP", 00:28:27.802 "adrfam": "IPv4", 00:28:27.802 "traddr": "10.0.0.2", 00:28:27.802 "trsvcid": "4420", 00:28:27.802 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:27.802 }, 00:28:27.802 "ctrlr_data": { 00:28:27.802 "cntlid": 2, 00:28:27.802 "vendor_id": "0x8086", 00:28:27.802 "model_number": "SPDK bdev Controller", 00:28:27.802 "serial_number": "00000000000000000000", 00:28:27.802 "firmware_revision": "24.09", 00:28:27.802 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.802 "oacs": { 00:28:27.802 "security": 0, 00:28:27.802 "format": 0, 00:28:27.802 "firmware": 0, 00:28:27.802 "ns_manage": 0 00:28:27.802 }, 00:28:27.802 "multi_ctrlr": true, 00:28:27.802 "ana_reporting": false 00:28:27.802 }, 00:28:27.802 "vs": { 00:28:27.802 "nvme_version": "1.3" 00:28:27.802 }, 00:28:27.802 "ns_data": { 00:28:27.802 "id": 1, 00:28:27.802 "can_share": true 00:28:27.802 } 00:28:27.802 } 00:28:27.802 ], 00:28:27.802 "mp_policy": "active_passive" 00:28:27.802 } 00:28:27.802 } 00:28:27.802 ] 00:28:27.802 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.802 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.prMc36iddM 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.prMc36iddM 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:28.062 [2024-07-14 01:14:17.248029] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:28.062 [2024-07-14 01:14:17.248234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.prMc36iddM 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:28.062 [2024-07-14 01:14:17.256039] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.prMc36iddM 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:28.062 [2024-07-14 01:14:17.264068] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:28.062 [2024-07-14 01:14:17.264133] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:28.062 nvme0n1 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.062 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:28.062 [ 00:28:28.062 { 00:28:28.062 "name": "nvme0n1", 00:28:28.062 "aliases": [ 00:28:28.062 "d977c56f-8295-467e-8c1b-40d7e59ee723" 00:28:28.062 ], 00:28:28.062 "product_name": "NVMe disk", 00:28:28.062 "block_size": 512, 00:28:28.062 "num_blocks": 2097152, 00:28:28.063 "uuid": "d977c56f-8295-467e-8c1b-40d7e59ee723", 00:28:28.063 "assigned_rate_limits": { 00:28:28.063 "rw_ios_per_sec": 0, 00:28:28.063 "rw_mbytes_per_sec": 0, 00:28:28.063 "r_mbytes_per_sec": 0, 00:28:28.063 "w_mbytes_per_sec": 0 00:28:28.063 }, 00:28:28.063 "claimed": false, 00:28:28.063 "zoned": false, 00:28:28.063 "supported_io_types": { 00:28:28.063 "read": true, 00:28:28.063 "write": true, 00:28:28.063 "unmap": false, 00:28:28.063 "flush": true, 00:28:28.063 "reset": true, 00:28:28.063 "nvme_admin": true, 00:28:28.063 "nvme_io": true, 00:28:28.063 "nvme_io_md": false, 00:28:28.063 "write_zeroes": true, 00:28:28.063 "zcopy": false, 00:28:28.063 "get_zone_info": false, 00:28:28.063 "zone_management": false, 00:28:28.063 "zone_append": false, 00:28:28.063 "compare": true, 00:28:28.063 "compare_and_write": true, 00:28:28.063 "abort": true, 00:28:28.063 "seek_hole": false, 00:28:28.063 "seek_data": false, 00:28:28.063 "copy": true, 00:28:28.063 "nvme_iov_md": false 00:28:28.063 }, 00:28:28.063 "memory_domains": [ 00:28:28.063 { 00:28:28.063 "dma_device_id": "system", 00:28:28.063 "dma_device_type": 1 00:28:28.063 } 00:28:28.063 ], 00:28:28.063 "driver_specific": { 00:28:28.063 "nvme": [ 00:28:28.063 { 00:28:28.063 "trid": { 00:28:28.063 "trtype": "TCP", 00:28:28.063 "adrfam": "IPv4", 00:28:28.063 "traddr": "10.0.0.2", 00:28:28.063 "trsvcid": "4421", 00:28:28.063 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:28.063 }, 00:28:28.063 "ctrlr_data": { 00:28:28.063 "cntlid": 3, 00:28:28.063 "vendor_id": "0x8086", 00:28:28.063 "model_number": "SPDK bdev Controller", 00:28:28.063 "serial_number": "00000000000000000000", 00:28:28.063 "firmware_revision": "24.09", 00:28:28.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:28:28.063 "oacs": { 00:28:28.063 "security": 0, 00:28:28.063 "format": 0, 00:28:28.063 "firmware": 0, 00:28:28.063 "ns_manage": 0 00:28:28.063 }, 00:28:28.063 "multi_ctrlr": true, 00:28:28.063 "ana_reporting": false 00:28:28.063 }, 00:28:28.063 "vs": { 00:28:28.063 "nvme_version": "1.3" 00:28:28.063 }, 00:28:28.063 "ns_data": { 00:28:28.063 "id": 1, 00:28:28.063 "can_share": true 00:28:28.063 } 00:28:28.063 } 00:28:28.063 ], 00:28:28.063 "mp_policy": "active_passive" 00:28:28.063 } 00:28:28.063 } 00:28:28.063 ] 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.prMc36iddM 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:28.063 rmmod nvme_tcp 00:28:28.063 rmmod nvme_fabrics 00:28:28.063 rmmod nvme_keyring 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1235780 ']' 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1235780 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1235780 ']' 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1235780 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1235780 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1235780' 00:28:28.063 killing process with pid 1235780 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1235780 00:28:28.063 [2024-07-14 01:14:17.451597] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:28:28.063 [2024-07-14 01:14:17.451637] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:28.063 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1235780 00:28:28.320 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:28.320 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:28.320 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:28.320 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.320 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:28.320 01:14:17 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.320 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.320 01:14:17 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.850 01:14:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:30.850 00:28:30.850 real 0m5.472s 00:28:30.850 user 0m2.078s 00:28:30.850 sys 0m1.771s 00:28:30.850 01:14:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.850 01:14:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:30.850 ************************************ 00:28:30.850 END TEST nvmf_async_init 00:28:30.850 ************************************ 00:28:30.850 01:14:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:30.850 01:14:19 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:30.850 01:14:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:30.850 01:14:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.850 01:14:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:30.850 ************************************ 00:28:30.850 START TEST dma 00:28:30.850 ************************************ 00:28:30.850 01:14:19 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:30.850 * Looking for test storage... 
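Stripped of the xtrace noise, the nvmf_async_init test that just finished is a short RPC conversation with the target followed by two attach/detach cycles, one plain and one over TLS. The sequence below is a paraphrase of the rpc_cmd calls in the trace (rpc_cmd is the test suite's thin wrapper around the target's JSON-RPC interface; the nguid, host NQN and PSK path are simply the values this run generated, and the redirect into the key file is implied by the chmod that follows it in the log):

  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd bdev_null_create null0 1024 512                 # 1024 blocks x 512 B backing namespace
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d977c56f8295467e8c1b40d7e59ee723
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
          -n nqn.2016-06.io.spdk:cnode0                   # plain attach, cntlid 1 in the bdev dump
  rpc_cmd bdev_nvme_reset_controller nvme0                # reconnect, comes back as cntlid 2
  rpc_cmd bdev_nvme_detach_controller nvme0
  # TLS variant: restrict the subsystem, add a secure listener plus a PSK, re-attach
  key_path=$(mktemp)                                      # /tmp/tmp.prMc36iddM in this run
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
          -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"   # TLS attach, cntlid 3
  rpc_cmd bdev_nvme_detach_controller nvme0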
00:28:30.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.850 01:14:19 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.850 01:14:19 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.850 01:14:19 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.850 01:14:19 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.850 01:14:19 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.850 01:14:19 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.850 01:14:19 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.850 01:14:19 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:30.850 01:14:19 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.850 01:14:19 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.850 01:14:19 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:30.850 01:14:19 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:30.850 00:28:30.850 real 0m0.069s 00:28:30.850 user 0m0.034s 00:28:30.850 sys 0m0.040s 00:28:30.850 01:14:19 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.850 01:14:19 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:30.850 ************************************ 00:28:30.850 END TEST dma 00:28:30.850 ************************************ 00:28:30.850 01:14:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:30.850 01:14:19 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:30.850 01:14:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:30.850 01:14:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.850 01:14:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:30.850 ************************************ 00:28:30.850 START TEST nvmf_identify 00:28:30.850 ************************************ 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:30.850 * Looking for test storage... 
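The dma test that flashes past above is effectively a no-op on this run: host/dma.sh only exercises DMA offload paths over RDMA, so with --transport=tcp it exits as soon as it has sourced common.sh, which is why the whole test accounts for well under a second of wall time. The guard it trips is, in outline (the variable name here is an assumption; the xtrace only shows its already-expanded value, tcp):

  # host/dma.sh, lines 12-13 in the trace above
  if [ "$TEST_TRANSPORT" != rdma ]; then
      exit 0
  fi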
00:28:30.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.850 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.851 01:14:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.757 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:32.757 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.757 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:32.757 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.757 01:14:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:32.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:28:32.757 00:28:32.757 --- 10.0.0.2 ping statistics --- 00:28:32.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.757 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:28:32.757 00:28:32.757 --- 10.0.0.1 ping statistics --- 00:28:32.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.757 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:32.757 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1237902 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1237902 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1237902 ']' 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:32.758 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:32.758 [2024-07-14 01:14:22.103495] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:32.758 [2024-07-14 01:14:22.103589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.758 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.016 [2024-07-14 01:14:22.173479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.016 [2024-07-14 01:14:22.267749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
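For reference, the TCP test topology that nvmf_tcp_init just built is plain iproute2 plumbing: one of the two ice ports (cvl_0_0) is moved into a dedicated network namespace where nvmf_tgt runs as the target, while the peer port (cvl_0_1) stays in the root namespace as the initiator. A minimal sketch of that bring-up, using only the interface names, addresses and commands already shown in the log above (nothing new, just the logged sequence collected in one place):

  # target/initiator split used by this test (names and IPs as logged)
  ip netns add cvl_0_0_ns_spdk                            # namespace that hosts the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                      # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator reachability

The target application itself is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as logged below), which is why every listener created later binds to 10.0.0.2.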
00:28:33.016 [2024-07-14 01:14:22.267804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.016 [2024-07-14 01:14:22.267821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.016 [2024-07-14 01:14:22.267834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.016 [2024-07-14 01:14:22.267845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.016 [2024-07-14 01:14:22.271889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.016 [2024-07-14 01:14:22.271943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.016 [2024-07-14 01:14:22.272012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.016 [2024-07-14 01:14:22.272016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.016 [2024-07-14 01:14:22.398470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.016 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.276 Malloc0 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.276 [2024-07-14 01:14:22.469560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.276 [ 00:28:33.276 { 00:28:33.276 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:33.276 "subtype": "Discovery", 00:28:33.276 "listen_addresses": [ 00:28:33.276 { 00:28:33.276 "trtype": "TCP", 00:28:33.276 "adrfam": "IPv4", 00:28:33.276 "traddr": "10.0.0.2", 00:28:33.276 "trsvcid": "4420" 00:28:33.276 } 00:28:33.276 ], 00:28:33.276 "allow_any_host": true, 00:28:33.276 "hosts": [] 00:28:33.276 }, 00:28:33.276 { 00:28:33.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.276 "subtype": "NVMe", 00:28:33.276 "listen_addresses": [ 00:28:33.276 { 00:28:33.276 "trtype": "TCP", 00:28:33.276 "adrfam": "IPv4", 00:28:33.276 "traddr": "10.0.0.2", 00:28:33.276 "trsvcid": "4420" 00:28:33.276 } 00:28:33.276 ], 00:28:33.276 "allow_any_host": true, 00:28:33.276 "hosts": [], 00:28:33.276 "serial_number": "SPDK00000000000001", 00:28:33.276 "model_number": "SPDK bdev Controller", 00:28:33.276 "max_namespaces": 32, 00:28:33.276 "min_cntlid": 1, 00:28:33.276 "max_cntlid": 65519, 00:28:33.276 "namespaces": [ 00:28:33.276 { 00:28:33.276 "nsid": 1, 00:28:33.276 "bdev_name": "Malloc0", 00:28:33.276 "name": "Malloc0", 00:28:33.276 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:33.276 "eui64": "ABCDEF0123456789", 00:28:33.276 "uuid": "4754bc0e-5348-44e9-a4db-407ecb58f8f2" 00:28:33.276 } 00:28:33.276 ] 00:28:33.276 } 00:28:33.276 ] 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.276 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:33.276 [2024-07-14 01:14:22.509614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
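With the target up, the control path above is all JSON-RPC: rpc_cmd in these scripts is a thin wrapper that ends up invoking SPDK's scripts/rpc.py against the target's default /var/tmp/spdk.sock (the wrapper path is an assumption stated here for illustration; the arguments are copied from the log). Collected in one place, the configuration the test applied before launching spdk_nvme_identify is:

  # sketch of the logged RPC sequence (rpc.py invocation assumed, arguments as logged)
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                                          # produces the JSON dump above

The identify run that starts next connects to the discovery subsystem first (subnqn nqn.2014-08.org.nvmexpress.discovery at 10.0.0.2:4420, with -L all enabling the full debug logging seen below) and reports the two discovery log records visible later in its output: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1.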
00:28:33.276 [2024-07-14 01:14:22.509666] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237928 ] 00:28:33.276 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.276 [2024-07-14 01:14:22.541988] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:33.276 [2024-07-14 01:14:22.542051] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:33.276 [2024-07-14 01:14:22.542061] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:33.276 [2024-07-14 01:14:22.542077] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:33.276 [2024-07-14 01:14:22.542086] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:33.276 [2024-07-14 01:14:22.545919] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:33.276 [2024-07-14 01:14:22.545988] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1137ae0 0 00:28:33.276 [2024-07-14 01:14:22.553894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:33.276 [2024-07-14 01:14:22.553915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:33.276 [2024-07-14 01:14:22.553924] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:33.276 [2024-07-14 01:14:22.553930] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:33.276 [2024-07-14 01:14:22.553996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.554010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.554018] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.276 [2024-07-14 01:14:22.554047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:33.276 [2024-07-14 01:14:22.554073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.276 [2024-07-14 01:14:22.561882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.276 [2024-07-14 01:14:22.561900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.276 [2024-07-14 01:14:22.561907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.561914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.276 [2024-07-14 01:14:22.561929] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:33.276 [2024-07-14 01:14:22.561955] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:33.276 [2024-07-14 01:14:22.561964] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:33.276 [2024-07-14 01:14:22.561988] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.561996] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.276 [2024-07-14 01:14:22.562014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.276 [2024-07-14 01:14:22.562038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.276 [2024-07-14 01:14:22.562202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.276 [2024-07-14 01:14:22.562217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.276 [2024-07-14 01:14:22.562224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.276 [2024-07-14 01:14:22.562247] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:33.276 [2024-07-14 01:14:22.562261] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:33.276 [2024-07-14 01:14:22.562274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.276 [2024-07-14 01:14:22.562298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.276 [2024-07-14 01:14:22.562319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.276 [2024-07-14 01:14:22.562496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.276 [2024-07-14 01:14:22.562509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.276 [2024-07-14 01:14:22.562516] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.276 [2024-07-14 01:14:22.562532] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:33.276 [2024-07-14 01:14:22.562546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:33.276 [2024-07-14 01:14:22.562558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562565] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562571] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.276 [2024-07-14 01:14:22.562582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.276 [2024-07-14 01:14:22.562602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.276 [2024-07-14 01:14:22.562744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.276 
[2024-07-14 01:14:22.562760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.276 [2024-07-14 01:14:22.562766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.276 [2024-07-14 01:14:22.562783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:33.276 [2024-07-14 01:14:22.562800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.276 [2024-07-14 01:14:22.562815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.276 [2024-07-14 01:14:22.562825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.277 [2024-07-14 01:14:22.562845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.277 [2024-07-14 01:14:22.562987] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.277 [2024-07-14 01:14:22.563001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.277 [2024-07-14 01:14:22.563008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.563015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.277 [2024-07-14 01:14:22.563023] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:33.277 [2024-07-14 01:14:22.563032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:33.277 [2024-07-14 01:14:22.563049] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:33.277 [2024-07-14 01:14:22.563160] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:33.277 [2024-07-14 01:14:22.563184] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:33.277 [2024-07-14 01:14:22.563198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.563205] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.563211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.563222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.277 [2024-07-14 01:14:22.563242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.277 [2024-07-14 01:14:22.563402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.277 [2024-07-14 01:14:22.563417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.277 [2024-07-14 01:14:22.563424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.563431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.277 [2024-07-14 01:14:22.563440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:33.277 [2024-07-14 01:14:22.563457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.563465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.563472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.563482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.277 [2024-07-14 01:14:22.563502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.277 [2024-07-14 01:14:22.563640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.277 [2024-07-14 01:14:22.563655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.277 [2024-07-14 01:14:22.563662] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.563668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.277 [2024-07-14 01:14:22.563677] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:33.277 [2024-07-14 01:14:22.563685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:33.277 [2024-07-14 01:14:22.563699] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:33.277 [2024-07-14 01:14:22.563713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:33.277 [2024-07-14 01:14:22.563729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.563737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.563748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.277 [2024-07-14 01:14:22.563768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.277 [2024-07-14 01:14:22.563992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.277 [2024-07-14 01:14:22.564010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.277 [2024-07-14 01:14:22.564018] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.564025] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1137ae0): datao=0, datal=4096, cccid=0 00:28:33.277 [2024-07-14 01:14:22.564033] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118e240) on tqpair(0x1137ae0): expected_datao=0, payload_size=4096 00:28:33.277 [2024-07-14 01:14:22.564041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.564068] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.564078] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.277 [2024-07-14 01:14:22.605018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.277 [2024-07-14 01:14:22.605025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.277 [2024-07-14 01:14:22.605045] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:33.277 [2024-07-14 01:14:22.605059] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:33.277 [2024-07-14 01:14:22.605067] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:33.277 [2024-07-14 01:14:22.605076] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:33.277 [2024-07-14 01:14:22.605085] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:33.277 [2024-07-14 01:14:22.605093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:33.277 [2024-07-14 01:14:22.605107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:33.277 [2024-07-14 01:14:22.605120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.605150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:33.277 [2024-07-14 01:14:22.605172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.277 [2024-07-14 01:14:22.605327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.277 [2024-07-14 01:14:22.605339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.277 [2024-07-14 01:14:22.605346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.277 [2024-07-14 01:14:22.605365] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605379] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.605388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.277 [2024-07-14 01:14:22.605398] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.605424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.277 [2024-07-14 01:14:22.605434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.605456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.277 [2024-07-14 01:14:22.605466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.605487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.277 [2024-07-14 01:14:22.605496] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:33.277 [2024-07-14 01:14:22.605515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:33.277 [2024-07-14 01:14:22.605527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.605544] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.277 [2024-07-14 01:14:22.605566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e240, cid 0, qid 0 00:28:33.277 [2024-07-14 01:14:22.605577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e3c0, cid 1, qid 0 00:28:33.277 [2024-07-14 01:14:22.605585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e540, cid 2, qid 0 00:28:33.277 [2024-07-14 01:14:22.605593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e6c0, cid 3, qid 0 00:28:33.277 [2024-07-14 01:14:22.605600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e840, cid 4, qid 0 00:28:33.277 [2024-07-14 01:14:22.605772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.277 [2024-07-14 01:14:22.605787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.277 [2024-07-14 01:14:22.605794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e840) on tqpair=0x1137ae0 00:28:33.277 [2024-07-14 01:14:22.605810] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:33.277 [2024-07-14 01:14:22.605819] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:33.277 [2024-07-14 01:14:22.605836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.277 [2024-07-14 01:14:22.605845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1137ae0) 00:28:33.277 [2024-07-14 01:14:22.605856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.277 [2024-07-14 01:14:22.609894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e840, cid 4, qid 0 00:28:33.277 [2024-07-14 01:14:22.610104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.277 [2024-07-14 01:14:22.610117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.277 [2024-07-14 01:14:22.610124] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610130] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1137ae0): datao=0, datal=4096, cccid=4 00:28:33.278 [2024-07-14 01:14:22.610143] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118e840) on tqpair(0x1137ae0): expected_datao=0, payload_size=4096 00:28:33.278 [2024-07-14 01:14:22.610151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610161] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610168] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.278 [2024-07-14 01:14:22.610216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.278 [2024-07-14 01:14:22.610222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e840) on tqpair=0x1137ae0 00:28:33.278 [2024-07-14 01:14:22.610247] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:33.278 [2024-07-14 01:14:22.610284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1137ae0) 00:28:33.278 [2024-07-14 01:14:22.610306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.278 [2024-07-14 01:14:22.610317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1137ae0) 00:28:33.278 [2024-07-14 01:14:22.610339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.278 [2024-07-14 01:14:22.610366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x118e840, cid 4, qid 0 00:28:33.278 [2024-07-14 01:14:22.610378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e9c0, cid 5, qid 0 00:28:33.278 [2024-07-14 01:14:22.610555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.278 [2024-07-14 01:14:22.610567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.278 [2024-07-14 01:14:22.610574] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610581] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1137ae0): datao=0, datal=1024, cccid=4 00:28:33.278 [2024-07-14 01:14:22.610588] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118e840) on tqpair(0x1137ae0): expected_datao=0, payload_size=1024 00:28:33.278 [2024-07-14 01:14:22.610596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610605] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610613] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.278 [2024-07-14 01:14:22.610630] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.278 [2024-07-14 01:14:22.610637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.610643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e9c0) on tqpair=0x1137ae0 00:28:33.278 [2024-07-14 01:14:22.651008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.278 [2024-07-14 01:14:22.651027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.278 [2024-07-14 01:14:22.651034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e840) on tqpair=0x1137ae0 00:28:33.278 [2024-07-14 01:14:22.651058] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1137ae0) 00:28:33.278 [2024-07-14 01:14:22.651078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.278 [2024-07-14 01:14:22.651112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e840, cid 4, qid 0 00:28:33.278 [2024-07-14 01:14:22.651283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.278 [2024-07-14 01:14:22.651295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.278 [2024-07-14 01:14:22.651302] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651308] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1137ae0): datao=0, datal=3072, cccid=4 00:28:33.278 [2024-07-14 01:14:22.651316] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118e840) on tqpair(0x1137ae0): expected_datao=0, payload_size=3072 00:28:33.278 [2024-07-14 01:14:22.651323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651333] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651341] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.278 [2024-07-14 01:14:22.651401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.278 [2024-07-14 01:14:22.651408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e840) on tqpair=0x1137ae0 00:28:33.278 [2024-07-14 01:14:22.651428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1137ae0) 00:28:33.278 [2024-07-14 01:14:22.651447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.278 [2024-07-14 01:14:22.651474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e840, cid 4, qid 0 00:28:33.278 [2024-07-14 01:14:22.651638] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.278 [2024-07-14 01:14:22.651653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.278 [2024-07-14 01:14:22.651660] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651666] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1137ae0): datao=0, datal=8, cccid=4 00:28:33.278 [2024-07-14 01:14:22.651674] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118e840) on tqpair(0x1137ae0): expected_datao=0, payload_size=8 00:28:33.278 [2024-07-14 01:14:22.651681] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651691] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.278 [2024-07-14 01:14:22.651698] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.540 [2024-07-14 01:14:22.692003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.540 [2024-07-14 01:14:22.692024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.540 [2024-07-14 01:14:22.692032] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.540 [2024-07-14 01:14:22.692040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e840) on tqpair=0x1137ae0 00:28:33.540 ===================================================== 00:28:33.540 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:33.540 ===================================================== 00:28:33.540 Controller Capabilities/Features 00:28:33.540 ================================ 00:28:33.540 Vendor ID: 0000 00:28:33.540 Subsystem Vendor ID: 0000 00:28:33.540 Serial Number: .................... 00:28:33.540 Model Number: ........................................ 
00:28:33.540 Firmware Version: 24.09 00:28:33.540 Recommended Arb Burst: 0 00:28:33.540 IEEE OUI Identifier: 00 00 00 00:28:33.540 Multi-path I/O 00:28:33.540 May have multiple subsystem ports: No 00:28:33.540 May have multiple controllers: No 00:28:33.540 Associated with SR-IOV VF: No 00:28:33.540 Max Data Transfer Size: 131072 00:28:33.540 Max Number of Namespaces: 0 00:28:33.540 Max Number of I/O Queues: 1024 00:28:33.540 NVMe Specification Version (VS): 1.3 00:28:33.540 NVMe Specification Version (Identify): 1.3 00:28:33.540 Maximum Queue Entries: 128 00:28:33.540 Contiguous Queues Required: Yes 00:28:33.540 Arbitration Mechanisms Supported 00:28:33.540 Weighted Round Robin: Not Supported 00:28:33.540 Vendor Specific: Not Supported 00:28:33.540 Reset Timeout: 15000 ms 00:28:33.540 Doorbell Stride: 4 bytes 00:28:33.540 NVM Subsystem Reset: Not Supported 00:28:33.540 Command Sets Supported 00:28:33.540 NVM Command Set: Supported 00:28:33.540 Boot Partition: Not Supported 00:28:33.540 Memory Page Size Minimum: 4096 bytes 00:28:33.540 Memory Page Size Maximum: 4096 bytes 00:28:33.540 Persistent Memory Region: Not Supported 00:28:33.540 Optional Asynchronous Events Supported 00:28:33.540 Namespace Attribute Notices: Not Supported 00:28:33.540 Firmware Activation Notices: Not Supported 00:28:33.540 ANA Change Notices: Not Supported 00:28:33.540 PLE Aggregate Log Change Notices: Not Supported 00:28:33.540 LBA Status Info Alert Notices: Not Supported 00:28:33.540 EGE Aggregate Log Change Notices: Not Supported 00:28:33.540 Normal NVM Subsystem Shutdown event: Not Supported 00:28:33.540 Zone Descriptor Change Notices: Not Supported 00:28:33.540 Discovery Log Change Notices: Supported 00:28:33.540 Controller Attributes 00:28:33.540 128-bit Host Identifier: Not Supported 00:28:33.540 Non-Operational Permissive Mode: Not Supported 00:28:33.540 NVM Sets: Not Supported 00:28:33.540 Read Recovery Levels: Not Supported 00:28:33.540 Endurance Groups: Not Supported 00:28:33.540 Predictable Latency Mode: Not Supported 00:28:33.540 Traffic Based Keep ALive: Not Supported 00:28:33.540 Namespace Granularity: Not Supported 00:28:33.540 SQ Associations: Not Supported 00:28:33.540 UUID List: Not Supported 00:28:33.540 Multi-Domain Subsystem: Not Supported 00:28:33.540 Fixed Capacity Management: Not Supported 00:28:33.540 Variable Capacity Management: Not Supported 00:28:33.541 Delete Endurance Group: Not Supported 00:28:33.541 Delete NVM Set: Not Supported 00:28:33.541 Extended LBA Formats Supported: Not Supported 00:28:33.541 Flexible Data Placement Supported: Not Supported 00:28:33.541 00:28:33.541 Controller Memory Buffer Support 00:28:33.541 ================================ 00:28:33.541 Supported: No 00:28:33.541 00:28:33.541 Persistent Memory Region Support 00:28:33.541 ================================ 00:28:33.541 Supported: No 00:28:33.541 00:28:33.541 Admin Command Set Attributes 00:28:33.541 ============================ 00:28:33.541 Security Send/Receive: Not Supported 00:28:33.541 Format NVM: Not Supported 00:28:33.541 Firmware Activate/Download: Not Supported 00:28:33.541 Namespace Management: Not Supported 00:28:33.541 Device Self-Test: Not Supported 00:28:33.541 Directives: Not Supported 00:28:33.541 NVMe-MI: Not Supported 00:28:33.541 Virtualization Management: Not Supported 00:28:33.541 Doorbell Buffer Config: Not Supported 00:28:33.541 Get LBA Status Capability: Not Supported 00:28:33.541 Command & Feature Lockdown Capability: Not Supported 00:28:33.541 Abort Command Limit: 1 00:28:33.541 Async 
Event Request Limit: 4 00:28:33.541 Number of Firmware Slots: N/A 00:28:33.541 Firmware Slot 1 Read-Only: N/A 00:28:33.541 Firmware Activation Without Reset: N/A 00:28:33.541 Multiple Update Detection Support: N/A 00:28:33.541 Firmware Update Granularity: No Information Provided 00:28:33.541 Per-Namespace SMART Log: No 00:28:33.541 Asymmetric Namespace Access Log Page: Not Supported 00:28:33.541 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:33.541 Command Effects Log Page: Not Supported 00:28:33.541 Get Log Page Extended Data: Supported 00:28:33.541 Telemetry Log Pages: Not Supported 00:28:33.541 Persistent Event Log Pages: Not Supported 00:28:33.541 Supported Log Pages Log Page: May Support 00:28:33.541 Commands Supported & Effects Log Page: Not Supported 00:28:33.541 Feature Identifiers & Effects Log Page:May Support 00:28:33.541 NVMe-MI Commands & Effects Log Page: May Support 00:28:33.541 Data Area 4 for Telemetry Log: Not Supported 00:28:33.541 Error Log Page Entries Supported: 128 00:28:33.541 Keep Alive: Not Supported 00:28:33.541 00:28:33.541 NVM Command Set Attributes 00:28:33.541 ========================== 00:28:33.541 Submission Queue Entry Size 00:28:33.541 Max: 1 00:28:33.541 Min: 1 00:28:33.541 Completion Queue Entry Size 00:28:33.541 Max: 1 00:28:33.541 Min: 1 00:28:33.541 Number of Namespaces: 0 00:28:33.541 Compare Command: Not Supported 00:28:33.541 Write Uncorrectable Command: Not Supported 00:28:33.541 Dataset Management Command: Not Supported 00:28:33.541 Write Zeroes Command: Not Supported 00:28:33.541 Set Features Save Field: Not Supported 00:28:33.541 Reservations: Not Supported 00:28:33.541 Timestamp: Not Supported 00:28:33.541 Copy: Not Supported 00:28:33.541 Volatile Write Cache: Not Present 00:28:33.541 Atomic Write Unit (Normal): 1 00:28:33.541 Atomic Write Unit (PFail): 1 00:28:33.541 Atomic Compare & Write Unit: 1 00:28:33.541 Fused Compare & Write: Supported 00:28:33.541 Scatter-Gather List 00:28:33.541 SGL Command Set: Supported 00:28:33.541 SGL Keyed: Supported 00:28:33.541 SGL Bit Bucket Descriptor: Not Supported 00:28:33.541 SGL Metadata Pointer: Not Supported 00:28:33.541 Oversized SGL: Not Supported 00:28:33.541 SGL Metadata Address: Not Supported 00:28:33.541 SGL Offset: Supported 00:28:33.541 Transport SGL Data Block: Not Supported 00:28:33.541 Replay Protected Memory Block: Not Supported 00:28:33.541 00:28:33.541 Firmware Slot Information 00:28:33.541 ========================= 00:28:33.541 Active slot: 0 00:28:33.541 00:28:33.541 00:28:33.541 Error Log 00:28:33.541 ========= 00:28:33.541 00:28:33.541 Active Namespaces 00:28:33.541 ================= 00:28:33.541 Discovery Log Page 00:28:33.541 ================== 00:28:33.541 Generation Counter: 2 00:28:33.541 Number of Records: 2 00:28:33.541 Record Format: 0 00:28:33.541 00:28:33.541 Discovery Log Entry 0 00:28:33.541 ---------------------- 00:28:33.541 Transport Type: 3 (TCP) 00:28:33.541 Address Family: 1 (IPv4) 00:28:33.541 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:33.541 Entry Flags: 00:28:33.541 Duplicate Returned Information: 1 00:28:33.541 Explicit Persistent Connection Support for Discovery: 1 00:28:33.541 Transport Requirements: 00:28:33.541 Secure Channel: Not Required 00:28:33.541 Port ID: 0 (0x0000) 00:28:33.541 Controller ID: 65535 (0xffff) 00:28:33.541 Admin Max SQ Size: 128 00:28:33.541 Transport Service Identifier: 4420 00:28:33.541 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:33.541 Transport Address: 10.0.0.2 00:28:33.541 
Discovery Log Entry 1 00:28:33.541 ---------------------- 00:28:33.541 Transport Type: 3 (TCP) 00:28:33.541 Address Family: 1 (IPv4) 00:28:33.541 Subsystem Type: 2 (NVM Subsystem) 00:28:33.541 Entry Flags: 00:28:33.541 Duplicate Returned Information: 0 00:28:33.541 Explicit Persistent Connection Support for Discovery: 0 00:28:33.541 Transport Requirements: 00:28:33.541 Secure Channel: Not Required 00:28:33.541 Port ID: 0 (0x0000) 00:28:33.541 Controller ID: 65535 (0xffff) 00:28:33.541 Admin Max SQ Size: 128 00:28:33.541 Transport Service Identifier: 4420 00:28:33.541 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:33.541 Transport Address: 10.0.0.2 [2024-07-14 01:14:22.692165] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:33.541 [2024-07-14 01:14:22.692188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e240) on tqpair=0x1137ae0 00:28:33.541 [2024-07-14 01:14:22.692200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.541 [2024-07-14 01:14:22.692209] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e3c0) on tqpair=0x1137ae0 00:28:33.541 [2024-07-14 01:14:22.692217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.541 [2024-07-14 01:14:22.692225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e540) on tqpair=0x1137ae0 00:28:33.541 [2024-07-14 01:14:22.692236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.541 [2024-07-14 01:14:22.692244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e6c0) on tqpair=0x1137ae0 00:28:33.541 [2024-07-14 01:14:22.692252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.541 [2024-07-14 01:14:22.692270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.692279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.692285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1137ae0) 00:28:33.541 [2024-07-14 01:14:22.692296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.541 [2024-07-14 01:14:22.692321] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e6c0, cid 3, qid 0 00:28:33.541 [2024-07-14 01:14:22.692457] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.541 [2024-07-14 01:14:22.692469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.541 [2024-07-14 01:14:22.692475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.692482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e6c0) on tqpair=0x1137ae0 00:28:33.541 [2024-07-14 01:14:22.692494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.692501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.692507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1137ae0) 00:28:33.541 [2024-07-14 
01:14:22.692518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.541 [2024-07-14 01:14:22.692543] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e6c0, cid 3, qid 0 00:28:33.541 [2024-07-14 01:14:22.692693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.541 [2024-07-14 01:14:22.692705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.541 [2024-07-14 01:14:22.692711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.692718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e6c0) on tqpair=0x1137ae0 00:28:33.541 [2024-07-14 01:14:22.692727] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:33.541 [2024-07-14 01:14:22.692735] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:33.541 [2024-07-14 01:14:22.692750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.692759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.692765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1137ae0) 00:28:33.541 [2024-07-14 01:14:22.692775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.541 [2024-07-14 01:14:22.692795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e6c0, cid 3, qid 0 00:28:33.541 [2024-07-14 01:14:22.696881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.541 [2024-07-14 01:14:22.696905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.541 [2024-07-14 01:14:22.696912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.696919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e6c0) on tqpair=0x1137ae0 00:28:33.541 [2024-07-14 01:14:22.696938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.541 [2024-07-14 01:14:22.696963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.696970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1137ae0) 00:28:33.542 [2024-07-14 01:14:22.696985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.542 [2024-07-14 01:14:22.697008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118e6c0, cid 3, qid 0 00:28:33.542 [2024-07-14 01:14:22.697165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.542 [2024-07-14 01:14:22.697176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.542 [2024-07-14 01:14:22.697183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.697190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118e6c0) on tqpair=0x1137ae0 00:28:33.542 [2024-07-14 01:14:22.697203] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:28:33.542 00:28:33.542 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:33.542 [2024-07-14 01:14:22.729137] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:33.542 [2024-07-14 01:14:22.729198] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237930 ] 00:28:33.542 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.542 [2024-07-14 01:14:22.762516] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:33.542 [2024-07-14 01:14:22.762566] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:33.542 [2024-07-14 01:14:22.762576] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:33.542 [2024-07-14 01:14:22.762588] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:33.542 [2024-07-14 01:14:22.762597] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:33.542 [2024-07-14 01:14:22.762817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:33.542 [2024-07-14 01:14:22.762877] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7eaae0 0 00:28:33.542 [2024-07-14 01:14:22.768881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:33.542 [2024-07-14 01:14:22.768898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:33.542 [2024-07-14 01:14:22.768905] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:33.542 [2024-07-14 01:14:22.768911] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:33.542 [2024-07-14 01:14:22.768962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.768974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.768981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.542 [2024-07-14 01:14:22.768994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:33.542 [2024-07-14 01:14:22.769020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.542 [2024-07-14 01:14:22.776880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.542 [2024-07-14 01:14:22.776897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.542 [2024-07-14 01:14:22.776904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.776911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.542 [2024-07-14 01:14:22.776947] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:33.542 [2024-07-14 01:14:22.776959] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:33.542 [2024-07-14 01:14:22.776969] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:33.542 [2024-07-14 01:14:22.776986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.776994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.542 [2024-07-14 01:14:22.777012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.542 [2024-07-14 01:14:22.777036] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.542 [2024-07-14 01:14:22.777215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.542 [2024-07-14 01:14:22.777227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.542 [2024-07-14 01:14:22.777234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.542 [2024-07-14 01:14:22.777249] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:33.542 [2024-07-14 01:14:22.777262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:33.542 [2024-07-14 01:14:22.777274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.542 [2024-07-14 01:14:22.777298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.542 [2024-07-14 01:14:22.777319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.542 [2024-07-14 01:14:22.777476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.542 [2024-07-14 01:14:22.777491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.542 [2024-07-14 01:14:22.777498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.542 [2024-07-14 01:14:22.777513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:33.542 [2024-07-14 01:14:22.777527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:33.542 [2024-07-14 01:14:22.777539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.542 [2024-07-14 01:14:22.777563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.542 [2024-07-14 01:14:22.777584] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.542 [2024-07-14 01:14:22.777763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.542 [2024-07-14 01:14:22.777778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.542 [2024-07-14 01:14:22.777784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.542 [2024-07-14 01:14:22.777800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:33.542 [2024-07-14 01:14:22.777821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.777837] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.542 [2024-07-14 01:14:22.777848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.542 [2024-07-14 01:14:22.777877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.542 [2024-07-14 01:14:22.778017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.542 [2024-07-14 01:14:22.778032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.542 [2024-07-14 01:14:22.778039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.778046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.542 [2024-07-14 01:14:22.778054] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:33.542 [2024-07-14 01:14:22.778062] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:33.542 [2024-07-14 01:14:22.778076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:33.542 [2024-07-14 01:14:22.778186] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:33.542 [2024-07-14 01:14:22.778193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:33.542 [2024-07-14 01:14:22.778205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.778212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.778219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.542 [2024-07-14 01:14:22.778229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.542 [2024-07-14 01:14:22.778250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.542 [2024-07-14 01:14:22.778430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.542 [2024-07-14 01:14:22.778443] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.542 [2024-07-14 01:14:22.778450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.778457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.542 [2024-07-14 01:14:22.778465] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:33.542 [2024-07-14 01:14:22.778482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.778490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.778497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.542 [2024-07-14 01:14:22.778508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.542 [2024-07-14 01:14:22.778528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.542 [2024-07-14 01:14:22.778662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.542 [2024-07-14 01:14:22.778674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.542 [2024-07-14 01:14:22.778681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.542 [2024-07-14 01:14:22.778687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.542 [2024-07-14 01:14:22.778695] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:33.542 [2024-07-14 01:14:22.778707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:33.542 [2024-07-14 01:14:22.778720] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:33.542 [2024-07-14 01:14:22.778738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.778751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.778759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.778769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.543 [2024-07-14 01:14:22.778790] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.543 [2024-07-14 01:14:22.779007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.543 [2024-07-14 01:14:22.779023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.543 [2024-07-14 01:14:22.779030] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.779037] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eaae0): datao=0, datal=4096, cccid=0 00:28:33.543 [2024-07-14 01:14:22.779045] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x841240) on tqpair(0x7eaae0): expected_datao=0, 
payload_size=4096 00:28:33.543 [2024-07-14 01:14:22.779052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.779074] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.779083] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.543 [2024-07-14 01:14:22.820048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.543 [2024-07-14 01:14:22.820055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.543 [2024-07-14 01:14:22.820073] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:33.543 [2024-07-14 01:14:22.820086] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:33.543 [2024-07-14 01:14:22.820095] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:33.543 [2024-07-14 01:14:22.820102] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:33.543 [2024-07-14 01:14:22.820109] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:33.543 [2024-07-14 01:14:22.820117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.820132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.820144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.820169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:33.543 [2024-07-14 01:14:22.820192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.543 [2024-07-14 01:14:22.820329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.543 [2024-07-14 01:14:22.820345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.543 [2024-07-14 01:14:22.820353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.543 [2024-07-14 01:14:22.820370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.820393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.543 
[2024-07-14 01:14:22.820403] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.820425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.543 [2024-07-14 01:14:22.820434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820441] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.820456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.543 [2024-07-14 01:14:22.820465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.820501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.543 [2024-07-14 01:14:22.820509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.820527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.820540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.820557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.543 [2024-07-14 01:14:22.820578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841240, cid 0, qid 0 00:28:33.543 [2024-07-14 01:14:22.820604] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8413c0, cid 1, qid 0 00:28:33.543 [2024-07-14 01:14:22.820613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841540, cid 2, qid 0 00:28:33.543 [2024-07-14 01:14:22.820620] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.543 [2024-07-14 01:14:22.820628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841840, cid 4, qid 0 00:28:33.543 [2024-07-14 01:14:22.820820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.543 [2024-07-14 01:14:22.820833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.543 [2024-07-14 01:14:22.820840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.820846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841840) on tqpair=0x7eaae0 00:28:33.543 [2024-07-14 01:14:22.820855] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:33.543 [2024-07-14 01:14:22.824878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.824898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.824910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.824921] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.824929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.824935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.824946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:33.543 [2024-07-14 01:14:22.824967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841840, cid 4, qid 0 00:28:33.543 [2024-07-14 01:14:22.825127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.543 [2024-07-14 01:14:22.825140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.543 [2024-07-14 01:14:22.825146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.825153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841840) on tqpair=0x7eaae0 00:28:33.543 [2024-07-14 01:14:22.825218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.825235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.825250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.825272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.825283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.543 [2024-07-14 01:14:22.825304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841840, cid 4, qid 0 00:28:33.543 [2024-07-14 01:14:22.825506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.543 [2024-07-14 01:14:22.825519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.543 [2024-07-14 01:14:22.825526] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.825532] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eaae0): datao=0, datal=4096, cccid=4 00:28:33.543 [2024-07-14 01:14:22.825540] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x841840) on tqpair(0x7eaae0): expected_datao=0, payload_size=4096 00:28:33.543 [2024-07-14 01:14:22.825547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.825574] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.825583] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.866023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.543 [2024-07-14 01:14:22.866043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.543 [2024-07-14 01:14:22.866050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.866057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841840) on tqpair=0x7eaae0 00:28:33.543 [2024-07-14 01:14:22.866073] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:33.543 [2024-07-14 01:14:22.866091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.866109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:33.543 [2024-07-14 01:14:22.866126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.866135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eaae0) 00:28:33.543 [2024-07-14 01:14:22.866146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.543 [2024-07-14 01:14:22.866168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841840, cid 4, qid 0 00:28:33.543 [2024-07-14 01:14:22.866337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.543 [2024-07-14 01:14:22.866350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.543 [2024-07-14 01:14:22.866357] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.866363] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eaae0): datao=0, datal=4096, cccid=4 00:28:33.543 [2024-07-14 01:14:22.866371] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x841840) on tqpair(0x7eaae0): expected_datao=0, payload_size=4096 00:28:33.543 [2024-07-14 01:14:22.866378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.543 [2024-07-14 01:14:22.866388] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.866396] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.866445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.544 [2024-07-14 01:14:22.866456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.544 [2024-07-14 01:14:22.866463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.866470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841840) on tqpair=0x7eaae0 00:28:33.544 [2024-07-14 01:14:22.866491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.866509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.866522] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.866530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.866540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.866561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841840, cid 4, qid 0 00:28:33.544 [2024-07-14 01:14:22.866716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.544 [2024-07-14 01:14:22.866728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.544 [2024-07-14 01:14:22.866735] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.866741] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eaae0): datao=0, datal=4096, cccid=4 00:28:33.544 [2024-07-14 01:14:22.866749] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x841840) on tqpair(0x7eaae0): expected_datao=0, payload_size=4096 00:28:33.544 [2024-07-14 01:14:22.866756] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.866784] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.866793] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.544 [2024-07-14 01:14:22.907026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.544 [2024-07-14 01:14:22.907033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841840) on tqpair=0x7eaae0 00:28:33.544 [2024-07-14 01:14:22.907054] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.907073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.907091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.907102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.907111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.907120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.907129] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:33.544 [2024-07-14 01:14:22.907136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:33.544 [2024-07-14 01:14:22.907145] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 
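The trace above is the SPDK NVMe-oF controller bring-up for nqn.2016-06.io.spdk:cnode1 over TCP: connect adminq, read VS and CAP, check CC.EN, disable and wait for CSTS.RDY = 0, set CC.EN = 1, wait for CSTS.RDY = 1, identify controller, configure AER, set keep-alive timeout, set number of queues, identify active namespaces and namespace ID descriptors, then ready. A minimal shell sketch of how this identify step could be re-run by hand against the same listener; the binary path, transport string, and -L flag are taken verbatim from the command logged above, and nothing beyond that is assumed:

    # Hedged sketch: re-issue the identify step from this log against the same TCP listener.
    # "-L all" enables every debug log flag, which is what produces the nvme_tcp/nvme_ctrlr
    # *DEBUG* lines interleaved with the identify report in this log.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    "$SPDK_BIN"/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L all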
00:28:33.544 [2024-07-14 01:14:22.907164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.907184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.907194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.907217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:33.544 [2024-07-14 01:14:22.907242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841840, cid 4, qid 0 00:28:33.544 [2024-07-14 01:14:22.907255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8419c0, cid 5, qid 0 00:28:33.544 [2024-07-14 01:14:22.907404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.544 [2024-07-14 01:14:22.907420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.544 [2024-07-14 01:14:22.907426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907433] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841840) on tqpair=0x7eaae0 00:28:33.544 [2024-07-14 01:14:22.907443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.544 [2024-07-14 01:14:22.907453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.544 [2024-07-14 01:14:22.907459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8419c0) on tqpair=0x7eaae0 00:28:33.544 [2024-07-14 01:14:22.907481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.907501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.907537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8419c0, cid 5, qid 0 00:28:33.544 [2024-07-14 01:14:22.907768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.544 [2024-07-14 01:14:22.907781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.544 [2024-07-14 01:14:22.907787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8419c0) on tqpair=0x7eaae0 00:28:33.544 [2024-07-14 01:14:22.907814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.907823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.907833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.907853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8419c0, cid 5, qid 0 00:28:33.544 [2024-07-14 01:14:22.908012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.544 [2024-07-14 01:14:22.908026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.544 [2024-07-14 01:14:22.908033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.908040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8419c0) on tqpair=0x7eaae0 00:28:33.544 [2024-07-14 01:14:22.908056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.908065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.908075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.908096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8419c0, cid 5, qid 0 00:28:33.544 [2024-07-14 01:14:22.908245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.544 [2024-07-14 01:14:22.908257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.544 [2024-07-14 01:14:22.908264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.908271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8419c0) on tqpair=0x7eaae0 00:28:33.544 [2024-07-14 01:14:22.908294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.908304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.908315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.908327] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.908334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.908344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.908355] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.908362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.908386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.908398] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.908405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7eaae0) 00:28:33.544 [2024-07-14 01:14:22.908414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.544 [2024-07-14 01:14:22.908435] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8419c0, cid 5, qid 0 00:28:33.544 [2024-07-14 01:14:22.908461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841840, cid 4, qid 0 00:28:33.544 [2024-07-14 01:14:22.908469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841b40, cid 6, qid 0 00:28:33.544 [2024-07-14 01:14:22.908476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841cc0, cid 7, qid 0 00:28:33.544 [2024-07-14 01:14:22.908721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.544 [2024-07-14 01:14:22.908737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.544 [2024-07-14 01:14:22.908744] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.544 [2024-07-14 01:14:22.908750] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eaae0): datao=0, datal=8192, cccid=5 00:28:33.544 [2024-07-14 01:14:22.908758] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8419c0) on tqpair(0x7eaae0): expected_datao=0, payload_size=8192 00:28:33.544 [2024-07-14 01:14:22.908766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.912884] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.912899] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.912908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.545 [2024-07-14 01:14:22.912917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.545 [2024-07-14 01:14:22.912924] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.912930] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eaae0): datao=0, datal=512, cccid=4 00:28:33.545 [2024-07-14 01:14:22.912938] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x841840) on tqpair(0x7eaae0): expected_datao=0, payload_size=512 00:28:33.545 [2024-07-14 01:14:22.912945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.912955] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.912962] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.912970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.545 [2024-07-14 01:14:22.912979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:33.545 [2024-07-14 01:14:22.912985] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.912991] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eaae0): datao=0, datal=512, cccid=6 00:28:33.545 [2024-07-14 01:14:22.912999] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x841b40) on tqpair(0x7eaae0): expected_datao=0, payload_size=512 00:28:33.545 [2024-07-14 01:14:22.913006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913015] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913022] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:33.545 [2024-07-14 01:14:22.913040] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:28:33.545 [2024-07-14 01:14:22.913046] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913052] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7eaae0): datao=0, datal=4096, cccid=7 00:28:33.545 [2024-07-14 01:14:22.913060] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x841cc0) on tqpair(0x7eaae0): expected_datao=0, payload_size=4096 00:28:33.545 [2024-07-14 01:14:22.913067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913076] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913083] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.545 [2024-07-14 01:14:22.913104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.545 [2024-07-14 01:14:22.913111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913117] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8419c0) on tqpair=0x7eaae0 00:28:33.545 [2024-07-14 01:14:22.913136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.545 [2024-07-14 01:14:22.913165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.545 [2024-07-14 01:14:22.913171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841840) on tqpair=0x7eaae0 00:28:33.545 [2024-07-14 01:14:22.913196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.545 [2024-07-14 01:14:22.913205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.545 [2024-07-14 01:14:22.913227] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841b40) on tqpair=0x7eaae0 00:28:33.545 [2024-07-14 01:14:22.913243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.545 [2024-07-14 01:14:22.913252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.545 [2024-07-14 01:14:22.913259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.545 [2024-07-14 01:14:22.913265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841cc0) on tqpair=0x7eaae0 00:28:33.545 ===================================================== 00:28:33.545 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:33.545 ===================================================== 00:28:33.545 Controller Capabilities/Features 00:28:33.545 ================================ 00:28:33.545 Vendor ID: 8086 00:28:33.545 Subsystem Vendor ID: 8086 00:28:33.545 Serial Number: SPDK00000000000001 00:28:33.545 Model Number: SPDK bdev Controller 00:28:33.545 Firmware Version: 24.09 00:28:33.545 Recommended Arb Burst: 6 00:28:33.545 IEEE OUI Identifier: e4 d2 5c 00:28:33.545 Multi-path I/O 00:28:33.545 May have multiple subsystem ports: Yes 00:28:33.545 May have multiple controllers: Yes 00:28:33.545 Associated with SR-IOV VF: No 00:28:33.545 Max Data Transfer Size: 131072 00:28:33.545 Max Number of Namespaces: 32 00:28:33.545 Max Number of I/O Queues: 127 00:28:33.545 NVMe Specification Version (VS): 1.3 
00:28:33.545 NVMe Specification Version (Identify): 1.3 00:28:33.545 Maximum Queue Entries: 128 00:28:33.545 Contiguous Queues Required: Yes 00:28:33.545 Arbitration Mechanisms Supported 00:28:33.545 Weighted Round Robin: Not Supported 00:28:33.545 Vendor Specific: Not Supported 00:28:33.545 Reset Timeout: 15000 ms 00:28:33.545 Doorbell Stride: 4 bytes 00:28:33.545 NVM Subsystem Reset: Not Supported 00:28:33.545 Command Sets Supported 00:28:33.545 NVM Command Set: Supported 00:28:33.545 Boot Partition: Not Supported 00:28:33.545 Memory Page Size Minimum: 4096 bytes 00:28:33.545 Memory Page Size Maximum: 4096 bytes 00:28:33.545 Persistent Memory Region: Not Supported 00:28:33.545 Optional Asynchronous Events Supported 00:28:33.545 Namespace Attribute Notices: Supported 00:28:33.545 Firmware Activation Notices: Not Supported 00:28:33.545 ANA Change Notices: Not Supported 00:28:33.545 PLE Aggregate Log Change Notices: Not Supported 00:28:33.545 LBA Status Info Alert Notices: Not Supported 00:28:33.545 EGE Aggregate Log Change Notices: Not Supported 00:28:33.545 Normal NVM Subsystem Shutdown event: Not Supported 00:28:33.545 Zone Descriptor Change Notices: Not Supported 00:28:33.545 Discovery Log Change Notices: Not Supported 00:28:33.545 Controller Attributes 00:28:33.545 128-bit Host Identifier: Supported 00:28:33.545 Non-Operational Permissive Mode: Not Supported 00:28:33.545 NVM Sets: Not Supported 00:28:33.545 Read Recovery Levels: Not Supported 00:28:33.545 Endurance Groups: Not Supported 00:28:33.545 Predictable Latency Mode: Not Supported 00:28:33.545 Traffic Based Keep ALive: Not Supported 00:28:33.545 Namespace Granularity: Not Supported 00:28:33.545 SQ Associations: Not Supported 00:28:33.545 UUID List: Not Supported 00:28:33.545 Multi-Domain Subsystem: Not Supported 00:28:33.545 Fixed Capacity Management: Not Supported 00:28:33.545 Variable Capacity Management: Not Supported 00:28:33.545 Delete Endurance Group: Not Supported 00:28:33.545 Delete NVM Set: Not Supported 00:28:33.545 Extended LBA Formats Supported: Not Supported 00:28:33.545 Flexible Data Placement Supported: Not Supported 00:28:33.545 00:28:33.545 Controller Memory Buffer Support 00:28:33.545 ================================ 00:28:33.545 Supported: No 00:28:33.545 00:28:33.545 Persistent Memory Region Support 00:28:33.545 ================================ 00:28:33.545 Supported: No 00:28:33.545 00:28:33.545 Admin Command Set Attributes 00:28:33.545 ============================ 00:28:33.545 Security Send/Receive: Not Supported 00:28:33.545 Format NVM: Not Supported 00:28:33.545 Firmware Activate/Download: Not Supported 00:28:33.545 Namespace Management: Not Supported 00:28:33.545 Device Self-Test: Not Supported 00:28:33.545 Directives: Not Supported 00:28:33.545 NVMe-MI: Not Supported 00:28:33.545 Virtualization Management: Not Supported 00:28:33.545 Doorbell Buffer Config: Not Supported 00:28:33.545 Get LBA Status Capability: Not Supported 00:28:33.545 Command & Feature Lockdown Capability: Not Supported 00:28:33.545 Abort Command Limit: 4 00:28:33.545 Async Event Request Limit: 4 00:28:33.545 Number of Firmware Slots: N/A 00:28:33.545 Firmware Slot 1 Read-Only: N/A 00:28:33.545 Firmware Activation Without Reset: N/A 00:28:33.545 Multiple Update Detection Support: N/A 00:28:33.545 Firmware Update Granularity: No Information Provided 00:28:33.545 Per-Namespace SMART Log: No 00:28:33.545 Asymmetric Namespace Access Log Page: Not Supported 00:28:33.545 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:33.545 Command Effects 
Log Page: Supported 00:28:33.545 Get Log Page Extended Data: Supported 00:28:33.545 Telemetry Log Pages: Not Supported 00:28:33.545 Persistent Event Log Pages: Not Supported 00:28:33.545 Supported Log Pages Log Page: May Support 00:28:33.545 Commands Supported & Effects Log Page: Not Supported 00:28:33.545 Feature Identifiers & Effects Log Page:May Support 00:28:33.545 NVMe-MI Commands & Effects Log Page: May Support 00:28:33.545 Data Area 4 for Telemetry Log: Not Supported 00:28:33.545 Error Log Page Entries Supported: 128 00:28:33.545 Keep Alive: Supported 00:28:33.545 Keep Alive Granularity: 10000 ms 00:28:33.545 00:28:33.545 NVM Command Set Attributes 00:28:33.545 ========================== 00:28:33.545 Submission Queue Entry Size 00:28:33.545 Max: 64 00:28:33.546 Min: 64 00:28:33.546 Completion Queue Entry Size 00:28:33.546 Max: 16 00:28:33.546 Min: 16 00:28:33.546 Number of Namespaces: 32 00:28:33.546 Compare Command: Supported 00:28:33.546 Write Uncorrectable Command: Not Supported 00:28:33.546 Dataset Management Command: Supported 00:28:33.546 Write Zeroes Command: Supported 00:28:33.546 Set Features Save Field: Not Supported 00:28:33.546 Reservations: Supported 00:28:33.546 Timestamp: Not Supported 00:28:33.546 Copy: Supported 00:28:33.546 Volatile Write Cache: Present 00:28:33.546 Atomic Write Unit (Normal): 1 00:28:33.546 Atomic Write Unit (PFail): 1 00:28:33.546 Atomic Compare & Write Unit: 1 00:28:33.546 Fused Compare & Write: Supported 00:28:33.546 Scatter-Gather List 00:28:33.546 SGL Command Set: Supported 00:28:33.546 SGL Keyed: Supported 00:28:33.546 SGL Bit Bucket Descriptor: Not Supported 00:28:33.546 SGL Metadata Pointer: Not Supported 00:28:33.546 Oversized SGL: Not Supported 00:28:33.546 SGL Metadata Address: Not Supported 00:28:33.546 SGL Offset: Supported 00:28:33.546 Transport SGL Data Block: Not Supported 00:28:33.546 Replay Protected Memory Block: Not Supported 00:28:33.546 00:28:33.546 Firmware Slot Information 00:28:33.546 ========================= 00:28:33.546 Active slot: 1 00:28:33.546 Slot 1 Firmware Revision: 24.09 00:28:33.546 00:28:33.546 00:28:33.546 Commands Supported and Effects 00:28:33.546 ============================== 00:28:33.546 Admin Commands 00:28:33.546 -------------- 00:28:33.546 Get Log Page (02h): Supported 00:28:33.546 Identify (06h): Supported 00:28:33.546 Abort (08h): Supported 00:28:33.546 Set Features (09h): Supported 00:28:33.546 Get Features (0Ah): Supported 00:28:33.546 Asynchronous Event Request (0Ch): Supported 00:28:33.546 Keep Alive (18h): Supported 00:28:33.546 I/O Commands 00:28:33.546 ------------ 00:28:33.546 Flush (00h): Supported LBA-Change 00:28:33.546 Write (01h): Supported LBA-Change 00:28:33.546 Read (02h): Supported 00:28:33.546 Compare (05h): Supported 00:28:33.546 Write Zeroes (08h): Supported LBA-Change 00:28:33.546 Dataset Management (09h): Supported LBA-Change 00:28:33.546 Copy (19h): Supported LBA-Change 00:28:33.546 00:28:33.546 Error Log 00:28:33.546 ========= 00:28:33.546 00:28:33.546 Arbitration 00:28:33.546 =========== 00:28:33.546 Arbitration Burst: 1 00:28:33.546 00:28:33.546 Power Management 00:28:33.546 ================ 00:28:33.546 Number of Power States: 1 00:28:33.546 Current Power State: Power State #0 00:28:33.546 Power State #0: 00:28:33.546 Max Power: 0.00 W 00:28:33.546 Non-Operational State: Operational 00:28:33.546 Entry Latency: Not Reported 00:28:33.546 Exit Latency: Not Reported 00:28:33.546 Relative Read Throughput: 0 00:28:33.546 Relative Read Latency: 0 00:28:33.546 Relative Write 
Throughput: 0 00:28:33.546 Relative Write Latency: 0 00:28:33.546 Idle Power: Not Reported 00:28:33.546 Active Power: Not Reported 00:28:33.546 Non-Operational Permissive Mode: Not Supported 00:28:33.546 00:28:33.546 Health Information 00:28:33.546 ================== 00:28:33.546 Critical Warnings: 00:28:33.546 Available Spare Space: OK 00:28:33.546 Temperature: OK 00:28:33.546 Device Reliability: OK 00:28:33.546 Read Only: No 00:28:33.546 Volatile Memory Backup: OK 00:28:33.546 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:33.546 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:33.546 Available Spare: 0% 00:28:33.546 Available Spare Threshold: 0% 00:28:33.546 Life Percentage Used:[2024-07-14 01:14:22.913388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.913399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7eaae0) 00:28:33.546 [2024-07-14 01:14:22.913410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.546 [2024-07-14 01:14:22.913433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x841cc0, cid 7, qid 0 00:28:33.546 [2024-07-14 01:14:22.913635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.546 [2024-07-14 01:14:22.913651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.546 [2024-07-14 01:14:22.913658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.913665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841cc0) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.913710] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:33.546 [2024-07-14 01:14:22.913730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841240) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.913740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.546 [2024-07-14 01:14:22.913749] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8413c0) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.913756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.546 [2024-07-14 01:14:22.913764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x841540) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.913772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.546 [2024-07-14 01:14:22.913780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.913787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.546 [2024-07-14 01:14:22.913800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.913808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.913814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.546 [2024-07-14 01:14:22.913824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.546 [2024-07-14 01:14:22.913846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.546 [2024-07-14 01:14:22.914006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.546 [2024-07-14 01:14:22.914021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.546 [2024-07-14 01:14:22.914028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.914050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.546 [2024-07-14 01:14:22.914076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.546 [2024-07-14 01:14:22.914102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.546 [2024-07-14 01:14:22.914258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.546 [2024-07-14 01:14:22.914273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.546 [2024-07-14 01:14:22.914280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.914295] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:33.546 [2024-07-14 01:14:22.914303] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:33.546 [2024-07-14 01:14:22.914319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.546 [2024-07-14 01:14:22.914345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.546 [2024-07-14 01:14:22.914365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.546 [2024-07-14 01:14:22.914642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.546 [2024-07-14 01:14:22.914654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.546 [2024-07-14 01:14:22.914661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.914684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.546 [2024-07-14 
01:14:22.914710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.546 [2024-07-14 01:14:22.914729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.546 [2024-07-14 01:14:22.914876] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.546 [2024-07-14 01:14:22.914892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.546 [2024-07-14 01:14:22.914899] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.914922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.914938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.546 [2024-07-14 01:14:22.914948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.546 [2024-07-14 01:14:22.914969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.546 [2024-07-14 01:14:22.915125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.546 [2024-07-14 01:14:22.915138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.546 [2024-07-14 01:14:22.915148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.915155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.546 [2024-07-14 01:14:22.915171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.546 [2024-07-14 01:14:22.915180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.547 [2024-07-14 01:14:22.915196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.547 [2024-07-14 01:14:22.915216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.547 [2024-07-14 01:14:22.915372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.547 [2024-07-14 01:14:22.915387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.547 [2024-07-14 01:14:22.915394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.547 [2024-07-14 01:14:22.915417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.547 [2024-07-14 01:14:22.915443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.547 [2024-07-14 01:14:22.915463] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.547 [2024-07-14 01:14:22.915619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.547 [2024-07-14 01:14:22.915631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.547 [2024-07-14 01:14:22.915637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.547 [2024-07-14 01:14:22.915659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915668] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.547 [2024-07-14 01:14:22.915685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.547 [2024-07-14 01:14:22.915705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.547 [2024-07-14 01:14:22.915843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.547 [2024-07-14 01:14:22.915855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.547 [2024-07-14 01:14:22.915862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.547 [2024-07-14 01:14:22.915897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915906] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.915913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.547 [2024-07-14 01:14:22.915924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.547 [2024-07-14 01:14:22.915944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.547 [2024-07-14 01:14:22.916099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.547 [2024-07-14 01:14:22.916111] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.547 [2024-07-14 01:14:22.916118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.547 [2024-07-14 01:14:22.916144] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.547 [2024-07-14 01:14:22.916170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.547 [2024-07-14 01:14:22.916190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.547 [2024-07-14 01:14:22.916336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.547 [2024-07-14 
01:14:22.916351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.547 [2024-07-14 01:14:22.916357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916364] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.547 [2024-07-14 01:14:22.916381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.547 [2024-07-14 01:14:22.916407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.547 [2024-07-14 01:14:22.916427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.547 [2024-07-14 01:14:22.916566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.547 [2024-07-14 01:14:22.916581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.547 [2024-07-14 01:14:22.916588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.547 [2024-07-14 01:14:22.916611] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.547 [2024-07-14 01:14:22.916637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.547 [2024-07-14 01:14:22.916657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.547 [2024-07-14 01:14:22.916792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.547 [2024-07-14 01:14:22.916808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.547 [2024-07-14 01:14:22.916814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.547 [2024-07-14 01:14:22.916838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:33.547 [2024-07-14 01:14:22.916854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7eaae0) 00:28:33.547 [2024-07-14 01:14:22.916864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.547 [2024-07-14 01:14:22.920903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8416c0, cid 3, qid 0 00:28:33.547 [2024-07-14 01:14:22.921084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:33.547 [2024-07-14 01:14:22.921096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:33.547 [2024-07-14 01:14:22.921103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:33.547 [2024-07-14 
01:14:22.921110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8416c0) on tqpair=0x7eaae0 00:28:33.547 [2024-07-14 01:14:22.921127] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:28:33.547 0% 00:28:33.547 Data Units Read: 0 00:28:33.547 Data Units Written: 0 00:28:33.547 Host Read Commands: 0 00:28:33.547 Host Write Commands: 0 00:28:33.547 Controller Busy Time: 0 minutes 00:28:33.547 Power Cycles: 0 00:28:33.547 Power On Hours: 0 hours 00:28:33.547 Unsafe Shutdowns: 0 00:28:33.547 Unrecoverable Media Errors: 0 00:28:33.547 Lifetime Error Log Entries: 0 00:28:33.547 Warning Temperature Time: 0 minutes 00:28:33.547 Critical Temperature Time: 0 minutes 00:28:33.547 00:28:33.547 Number of Queues 00:28:33.547 ================ 00:28:33.547 Number of I/O Submission Queues: 127 00:28:33.547 Number of I/O Completion Queues: 127 00:28:33.547 00:28:33.547 Active Namespaces 00:28:33.547 ================= 00:28:33.547 Namespace ID:1 00:28:33.547 Error Recovery Timeout: Unlimited 00:28:33.547 Command Set Identifier: NVM (00h) 00:28:33.547 Deallocate: Supported 00:28:33.547 Deallocated/Unwritten Error: Not Supported 00:28:33.547 Deallocated Read Value: Unknown 00:28:33.547 Deallocate in Write Zeroes: Not Supported 00:28:33.547 Deallocated Guard Field: 0xFFFF 00:28:33.547 Flush: Supported 00:28:33.547 Reservation: Supported 00:28:33.547 Namespace Sharing Capabilities: Multiple Controllers 00:28:33.547 Size (in LBAs): 131072 (0GiB) 00:28:33.547 Capacity (in LBAs): 131072 (0GiB) 00:28:33.547 Utilization (in LBAs): 131072 (0GiB) 00:28:33.547 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:33.547 EUI64: ABCDEF0123456789 00:28:33.547 UUID: 4754bc0e-5348-44e9-a4db-407ecb58f8f2 00:28:33.547 Thin Provisioning: Not Supported 00:28:33.547 Per-NS Atomic Units: Yes 00:28:33.547 Atomic Boundary Size (Normal): 0 00:28:33.547 Atomic Boundary Size (PFail): 0 00:28:33.547 Atomic Boundary Offset: 0 00:28:33.547 Maximum Single Source Range Length: 65535 00:28:33.547 Maximum Copy Length: 65535 00:28:33.547 Maximum Source Range Count: 1 00:28:33.547 NGUID/EUI64 Never Reused: No 00:28:33.547 Namespace Write Protected: No 00:28:33.547 Number of LBA Formats: 1 00:28:33.547 Current LBA Format: LBA Format #00 00:28:33.547 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:33.547 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:33.547 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:28:33.808 rmmod nvme_tcp 00:28:33.808 rmmod nvme_fabrics 00:28:33.808 rmmod nvme_keyring 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1237902 ']' 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1237902 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1237902 ']' 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1237902 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:33.808 01:14:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1237902 00:28:33.808 01:14:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:33.808 01:14:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:33.808 01:14:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1237902' 00:28:33.808 killing process with pid 1237902 00:28:33.808 01:14:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1237902 00:28:33.808 01:14:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1237902 00:28:34.073 01:14:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:34.073 01:14:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:34.073 01:14:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:34.073 01:14:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:34.073 01:14:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:34.073 01:14:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.073 01:14:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.073 01:14:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.979 01:14:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:35.979 00:28:35.979 real 0m5.421s 00:28:35.979 user 0m4.557s 00:28:35.979 sys 0m1.865s 00:28:35.979 01:14:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:35.979 01:14:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:35.979 ************************************ 00:28:35.979 END TEST nvmf_identify 00:28:35.979 ************************************ 00:28:35.979 01:14:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:35.979 01:14:25 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:35.979 01:14:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:35.979 01:14:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.979 01:14:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:35.979 ************************************ 00:28:35.979 START TEST nvmf_perf 00:28:35.979 
************************************ 00:28:35.979 01:14:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:36.238 * Looking for test storage... 00:28:36.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:36.238 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.238 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:36.238 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.238 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.238 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.238 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.238 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.239 
01:14:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:36.239 01:14:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.143 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:38.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:38.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:38.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:38.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:38.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:28:38.144 00:28:38.144 --- 10.0.0.2 ping statistics --- 00:28:38.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.144 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:38.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:28:38.144 00:28:38.144 --- 10.0.0.1 ping statistics --- 00:28:38.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.144 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1239965 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1239965 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1239965 ']' 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.144 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:38.144 [2024-07-14 01:14:27.545109] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:38.144 [2024-07-14 01:14:27.545218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.404 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.404 [2024-07-14 01:14:27.613516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.404 [2024-07-14 01:14:27.705996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.404 [2024-07-14 01:14:27.706060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:38.404 [2024-07-14 01:14:27.706077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.404 [2024-07-14 01:14:27.706091] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.404 [2024-07-14 01:14:27.706103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.404 [2024-07-14 01:14:27.706195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.404 [2024-07-14 01:14:27.706249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:38.404 [2024-07-14 01:14:27.706366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:38.404 [2024-07-14 01:14:27.706368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.664 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.664 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:38.664 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.664 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:38.664 01:14:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:38.664 01:14:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.664 01:14:27 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:38.664 01:14:27 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:42.001 01:14:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:42.001 01:14:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:42.001 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:42.001 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:42.260 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:42.260 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:42.260 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:42.260 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:42.260 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:42.519 [2024-07-14 01:14:31.726734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.519 01:14:31 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.777 01:14:32 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:42.777 01:14:32 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:43.036 01:14:32 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:43.036 01:14:32 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:43.293 01:14:32 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.551 [2024-07-14 01:14:32.834841] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.551 01:14:32 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:43.810 01:14:33 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:43.810 01:14:33 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:43.810 01:14:33 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:43.810 01:14:33 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:45.187 Initializing NVMe Controllers 00:28:45.187 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:45.187 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:45.187 Initialization complete. Launching workers. 00:28:45.187 ======================================================== 00:28:45.187 Latency(us) 00:28:45.187 Device Information : IOPS MiB/s Average min max 00:28:45.187 PCIE (0000:88:00.0) NSID 1 from core 0: 85718.53 334.84 372.66 28.17 4304.87 00:28:45.187 ======================================================== 00:28:45.187 Total : 85718.53 334.84 372.66 28.17 4304.87 00:28:45.187 00:28:45.187 01:14:34 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:45.187 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.587 Initializing NVMe Controllers 00:28:46.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:46.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:46.587 Initialization complete. Launching workers. 
00:28:46.587 ======================================================== 00:28:46.587 Latency(us) 00:28:46.587 Device Information : IOPS MiB/s Average min max 00:28:46.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 97.00 0.38 10624.05 258.67 46075.59 00:28:46.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14161.69 6981.54 49912.01 00:28:46.587 ======================================================== 00:28:46.587 Total : 168.00 0.66 12119.12 258.67 49912.01 00:28:46.587 00:28:46.587 01:14:35 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.587 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.964 Initializing NVMe Controllers 00:28:47.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:47.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:47.964 Initialization complete. Launching workers. 00:28:47.964 ======================================================== 00:28:47.964 Latency(us) 00:28:47.964 Device Information : IOPS MiB/s Average min max 00:28:47.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8390.89 32.78 3814.42 564.03 7564.21 00:28:47.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3877.56 15.15 8281.14 6409.07 16037.49 00:28:47.964 ======================================================== 00:28:47.964 Total : 12268.44 47.92 5226.17 564.03 16037.49 00:28:47.964 00:28:47.964 01:14:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:47.964 01:14:36 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:47.964 01:14:36 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:47.964 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.500 Initializing NVMe Controllers 00:28:50.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.500 Controller IO queue size 128, less than required. 00:28:50.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.500 Controller IO queue size 128, less than required. 00:28:50.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:50.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:50.500 Initialization complete. Launching workers. 
00:28:50.500 ======================================================== 00:28:50.500 Latency(us) 00:28:50.500 Device Information : IOPS MiB/s Average min max 00:28:50.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 680.00 170.00 197447.57 103335.80 265473.14 00:28:50.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.50 143.62 230896.25 70453.58 348309.52 00:28:50.500 ======================================================== 00:28:50.500 Total : 1254.50 313.62 212765.44 70453.58 348309.52 00:28:50.500 00:28:50.500 01:14:39 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:50.500 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.500 No valid NVMe controllers or AIO or URING devices found 00:28:50.500 Initializing NVMe Controllers 00:28:50.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.500 Controller IO queue size 128, less than required. 00:28:50.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.500 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:50.500 Controller IO queue size 128, less than required. 00:28:50.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:50.500 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:50.500 WARNING: Some requested NVMe devices were skipped 00:28:50.500 01:14:39 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:50.500 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.034 Initializing NVMe Controllers 00:28:53.034 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.034 Controller IO queue size 128, less than required. 00:28:53.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.034 Controller IO queue size 128, less than required. 00:28:53.034 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:53.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:53.034 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:53.034 Initialization complete. Launching workers. 
00:28:53.034 00:28:53.034 ==================== 00:28:53.034 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:53.034 TCP transport: 00:28:53.034 polls: 30550 00:28:53.034 idle_polls: 8923 00:28:53.034 sock_completions: 21627 00:28:53.034 nvme_completions: 3809 00:28:53.034 submitted_requests: 5688 00:28:53.034 queued_requests: 1 00:28:53.034 00:28:53.034 ==================== 00:28:53.034 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:53.034 TCP transport: 00:28:53.034 polls: 33407 00:28:53.034 idle_polls: 12558 00:28:53.034 sock_completions: 20849 00:28:53.034 nvme_completions: 3837 00:28:53.034 submitted_requests: 5778 00:28:53.034 queued_requests: 1 00:28:53.034 ======================================================== 00:28:53.034 Latency(us) 00:28:53.034 Device Information : IOPS MiB/s Average min max 00:28:53.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 951.98 237.99 139261.73 86142.60 224133.61 00:28:53.034 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 958.98 239.74 136016.77 55417.05 191723.72 00:28:53.034 ======================================================== 00:28:53.034 Total : 1910.95 477.74 137633.31 55417.05 224133.61 00:28:53.034 00:28:53.034 01:14:42 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:53.034 01:14:42 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:53.292 01:14:42 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:53.292 01:14:42 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:53.292 01:14:42 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:56.576 01:14:45 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=0d0ea591-55d3-454a-ae84-47d4c591e6ee 00:28:56.576 01:14:45 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0d0ea591-55d3-454a-ae84-47d4c591e6ee 00:28:56.576 01:14:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=0d0ea591-55d3-454a-ae84-47d4c591e6ee 00:28:56.576 01:14:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:56.576 01:14:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:56.576 01:14:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:56.576 01:14:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:56.834 { 00:28:56.834 "uuid": "0d0ea591-55d3-454a-ae84-47d4c591e6ee", 00:28:56.834 "name": "lvs_0", 00:28:56.834 "base_bdev": "Nvme0n1", 00:28:56.834 "total_data_clusters": 238234, 00:28:56.834 "free_clusters": 238234, 00:28:56.834 "block_size": 512, 00:28:56.834 "cluster_size": 4194304 00:28:56.834 } 00:28:56.834 ]' 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0d0ea591-55d3-454a-ae84-47d4c591e6ee") .free_clusters' 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="0d0ea591-55d3-454a-ae84-47d4c591e6ee") .cluster_size' 00:28:56.834 01:14:46 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:56.834 952936 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:56.834 01:14:46 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0d0ea591-55d3-454a-ae84-47d4c591e6ee lbd_0 20480 00:28:57.398 01:14:46 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=fcdcd211-b7dc-4195-89d0-32e193846e32 00:28:57.398 01:14:46 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore fcdcd211-b7dc-4195-89d0-32e193846e32 lvs_n_0 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5fe5343a-43ca-47cb-816b-852aec92a203 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5fe5343a-43ca-47cb-816b-852aec92a203 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=5fe5343a-43ca-47cb-816b-852aec92a203 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:58.354 { 00:28:58.354 "uuid": "0d0ea591-55d3-454a-ae84-47d4c591e6ee", 00:28:58.354 "name": "lvs_0", 00:28:58.354 "base_bdev": "Nvme0n1", 00:28:58.354 "total_data_clusters": 238234, 00:28:58.354 "free_clusters": 233114, 00:28:58.354 "block_size": 512, 00:28:58.354 "cluster_size": 4194304 00:28:58.354 }, 00:28:58.354 { 00:28:58.354 "uuid": "5fe5343a-43ca-47cb-816b-852aec92a203", 00:28:58.354 "name": "lvs_n_0", 00:28:58.354 "base_bdev": "fcdcd211-b7dc-4195-89d0-32e193846e32", 00:28:58.354 "total_data_clusters": 5114, 00:28:58.354 "free_clusters": 5114, 00:28:58.354 "block_size": 512, 00:28:58.354 "cluster_size": 4194304 00:28:58.354 } 00:28:58.354 ]' 00:28:58.354 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="5fe5343a-43ca-47cb-816b-852aec92a203") .free_clusters' 00:28:58.612 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:58.612 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="5fe5343a-43ca-47cb-816b-852aec92a203") .cluster_size' 00:28:58.612 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:58.612 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:58.612 01:14:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:58.612 20456 00:28:58.612 01:14:47 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:58.612 01:14:47 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5fe5343a-43ca-47cb-816b-852aec92a203 lbd_nest_0 20456 00:28:58.869 01:14:48 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=fa9b1d48-131a-4524-b682-81c80a4b06ee 00:28:58.869 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:59.127 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:59.127 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 fa9b1d48-131a-4524-b682-81c80a4b06ee 00:28:59.384 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.640 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:59.640 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:59.640 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:59.640 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:59.640 01:14:48 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:59.640 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.846 Initializing NVMe Controllers 00:29:11.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:11.846 Initialization complete. Launching workers. 00:29:11.846 ======================================================== 00:29:11.846 Latency(us) 00:29:11.846 Device Information : IOPS MiB/s Average min max 00:29:11.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 51.19 0.02 19580.38 234.76 45736.30 00:29:11.846 ======================================================== 00:29:11.846 Total : 51.19 0.02 19580.38 234.76 45736.30 00:29:11.846 00:29:11.846 01:14:59 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:11.846 01:14:59 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.846 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.955 Initializing NVMe Controllers 00:29:19.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.955 Initialization complete. Launching workers. 
00:29:19.955 ======================================================== 00:29:19.955 Latency(us) 00:29:19.955 Device Information : IOPS MiB/s Average min max 00:29:19.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.48 10.06 12424.92 4977.66 50874.08 00:29:19.955 ======================================================== 00:29:19.955 Total : 80.48 10.06 12424.92 4977.66 50874.08 00:29:19.955 00:29:19.955 01:15:09 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:19.955 01:15:09 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:19.955 01:15:09 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:20.215 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.193 Initializing NVMe Controllers 00:29:30.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:30.193 Initialization complete. Launching workers. 00:29:30.193 ======================================================== 00:29:30.193 Latency(us) 00:29:30.193 Device Information : IOPS MiB/s Average min max 00:29:30.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7050.60 3.44 4540.26 306.08 11034.38 00:29:30.193 ======================================================== 00:29:30.193 Total : 7050.60 3.44 4540.26 306.08 11034.38 00:29:30.193 00:29:30.193 01:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:30.193 01:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:30.193 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.446 Initializing NVMe Controllers 00:29:42.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:42.446 Initialization complete. Launching workers. 00:29:42.446 ======================================================== 00:29:42.446 Latency(us) 00:29:42.446 Device Information : IOPS MiB/s Average min max 00:29:42.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1773.09 221.64 18058.32 1352.82 37551.05 00:29:42.446 ======================================================== 00:29:42.446 Total : 1773.09 221.64 18058.32 1352.82 37551.05 00:29:42.446 00:29:42.446 01:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:42.446 01:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:42.446 01:15:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:42.446 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.423 Initializing NVMe Controllers 00:29:52.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:52.423 Controller IO queue size 128, less than required. 00:29:52.423 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:52.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:52.423 Initialization complete. Launching workers. 00:29:52.423 ======================================================== 00:29:52.423 Latency(us) 00:29:52.423 Device Information : IOPS MiB/s Average min max 00:29:52.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11899.68 5.81 10759.03 1659.80 25176.67 00:29:52.423 ======================================================== 00:29:52.423 Total : 11899.68 5.81 10759.03 1659.80 25176.67 00:29:52.423 00:29:52.423 01:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:52.423 01:15:40 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:52.423 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.396 Initializing NVMe Controllers 00:30:02.396 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.396 Controller IO queue size 128, less than required. 00:30:02.396 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.396 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:02.396 Initialization complete. Launching workers. 00:30:02.396 ======================================================== 00:30:02.396 Latency(us) 00:30:02.396 Device Information : IOPS MiB/s Average min max 00:30:02.396 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1200.51 150.06 106883.95 32009.91 183070.32 00:30:02.396 ======================================================== 00:30:02.396 Total : 1200.51 150.06 106883.95 32009.91 183070.32 00:30:02.396 00:30:02.396 01:15:50 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:02.396 01:15:50 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fa9b1d48-131a-4524-b682-81c80a4b06ee 00:30:02.396 01:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:02.654 01:15:51 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fcdcd211-b7dc-4195-89d0-32e193846e32 00:30:02.912 01:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:03.170 rmmod nvme_tcp 00:30:03.170 rmmod nvme_fabrics 00:30:03.170 rmmod nvme_keyring 00:30:03.170 01:15:52 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1239965 ']' 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1239965 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1239965 ']' 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1239965 00:30:03.170 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:30:03.430 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:03.430 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1239965 00:30:03.430 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:03.430 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:03.430 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1239965' 00:30:03.430 killing process with pid 1239965 00:30:03.430 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1239965 00:30:03.430 01:15:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1239965 00:30:04.810 01:15:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:04.810 01:15:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:04.810 01:15:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:04.810 01:15:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:04.810 01:15:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:04.810 01:15:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.810 01:15:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:04.810 01:15:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.349 01:15:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:07.349 00:30:07.349 real 1m30.871s 00:30:07.349 user 5m28.564s 00:30:07.349 sys 0m15.797s 00:30:07.349 01:15:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:07.349 01:15:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:07.349 ************************************ 00:30:07.349 END TEST nvmf_perf 00:30:07.349 ************************************ 00:30:07.349 01:15:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:07.349 01:15:56 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:07.349 01:15:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:07.349 01:15:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.349 01:15:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:07.349 ************************************ 00:30:07.349 START TEST nvmf_fio_host 00:30:07.349 ************************************ 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:07.349 * Looking for test 
storage... 00:30:07.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.349 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:07.350 01:15:56 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:09.253 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
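The trace above first builds per-vendor tables of NVMe-oF-capable NICs before walking the PCI bus: Intel E810 (0x1592, 0x159b), Intel X722 (0x37d2), and several Mellanox ConnectX IDs; the first port it reports, 0000:0a:00.0, matches the E810 entry 0x159b and is bound to the ice driver. As a rough stand-alone illustration (not the script's own pci_bus_cache lookup in nvmf/common.sh), the same "is a supported port present?" question could be asked of lspci directly:

# Illustrative only: list Intel E810 functions by vendor:device ID, the pair matched in the trace.
lspci -Dnn -d 8086:159b
# On this host the trace reports two such functions: 0000:0a:00.0 and 0000:0a:00.1.
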
00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:09.253 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:09.253 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:09.253 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
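Having matched both E810 functions, the script resolves each PCI address to its kernel net device through sysfs (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)), which is how 0000:0a:00.0 and 0000:0a:00.1 become cvl_0_0 and cvl_0_1 and is_hw ends up as yes. A minimal sketch of that mapping, assuming the same two addresses (illustrative; the actual logic lives in nvmf/common.sh):

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # Each entry under /sys/bus/pci/devices/<addr>/net/ names a net device backed by that PCI function.
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
    done
done

The two interfaces found here are what nvmf_tcp_init then splits across a network namespace in the entries that follow: cvl_0_0 becomes the 10.0.0.2 target side inside cvl_0_0_ns_spdk, cvl_0_1 stays in the default namespace as the 10.0.0.1 initiator side, and the ping checks below verify the path in both directions.
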
00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:09.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:09.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:30:09.253 00:30:09.253 --- 10.0.0.2 ping statistics --- 00:30:09.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.253 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:09.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:30:09.253 00:30:09.253 --- 10.0.0.1 ping statistics --- 00:30:09.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.253 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:09.253 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1252548 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1252548 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1252548 ']' 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:09.254 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.254 [2024-07-14 01:15:58.429487] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:09.254 [2024-07-14 01:15:58.429565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.254 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.254 [2024-07-14 01:15:58.495268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.254 [2024-07-14 01:15:58.586382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:09.254 [2024-07-14 01:15:58.586443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.254 [2024-07-14 01:15:58.586460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.254 [2024-07-14 01:15:58.586473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.254 [2024-07-14 01:15:58.586484] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:09.254 [2024-07-14 01:15:58.586568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.254 [2024-07-14 01:15:58.586635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.254 [2024-07-14 01:15:58.586730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.254 [2024-07-14 01:15:58.586732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.546 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:09.546 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:09.547 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:09.547 [2024-07-14 01:15:58.919272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.805 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:09.805 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:09.805 01:15:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.805 01:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:09.805 Malloc1 00:30:10.064 01:15:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:10.322 01:15:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:10.322 01:15:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.580 [2024-07-14 01:15:59.954461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.580 01:15:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:10.839 01:16:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:11.097 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:11.097 fio-3.35 00:30:11.097 Starting 1 thread 00:30:11.097 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.627 00:30:13.627 test: (groupid=0, jobs=1): err= 0: pid=1252904: Sun Jul 14 01:16:02 2024 00:30:13.627 read: IOPS=8777, BW=34.3MiB/s (36.0MB/s)(68.8MiB/2007msec) 00:30:13.627 slat (nsec): min=1981, max=162338, avg=2665.36, stdev=1991.40 00:30:13.627 clat (usec): min=3182, max=14231, avg=8038.20, stdev=625.41 00:30:13.627 lat (usec): min=3213, max=14234, avg=8040.86, stdev=625.30 00:30:13.627 clat percentiles (usec): 00:30:13.627 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7504], 00:30:13.627 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:30:13.627 | 70.00th=[ 8356], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 8979], 00:30:13.627 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[10945], 99.95th=[13304], 00:30:13.627 | 99.99th=[13698] 00:30:13.627 bw ( KiB/s): 
min=34816, max=35584, per=100.00%, avg=35114.00, stdev=338.87, samples=4 00:30:13.627 iops : min= 8704, max= 8896, avg=8778.50, stdev=84.72, samples=4 00:30:13.627 write: IOPS=8788, BW=34.3MiB/s (36.0MB/s)(68.9MiB/2007msec); 0 zone resets 00:30:13.627 slat (usec): min=2, max=126, avg= 2.78, stdev= 1.41 00:30:13.627 clat (usec): min=1429, max=12832, avg=6427.76, stdev=558.21 00:30:13.627 lat (usec): min=1438, max=12835, avg=6430.54, stdev=558.15 00:30:13.627 clat percentiles (usec): 00:30:13.627 | 1.00th=[ 5276], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 5997], 00:30:13.627 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:30:13.627 | 70.00th=[ 6652], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:30:13.627 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[10945], 99.95th=[11994], 00:30:13.627 | 99.99th=[12780] 00:30:13.627 bw ( KiB/s): min=34176, max=35712, per=99.97%, avg=35142.00, stdev=720.40, samples=4 00:30:13.627 iops : min= 8544, max= 8928, avg=8785.50, stdev=180.10, samples=4 00:30:13.627 lat (msec) : 2=0.01%, 4=0.09%, 10=99.70%, 20=0.20% 00:30:13.627 cpu : usr=53.94%, sys=38.63%, ctx=72, majf=0, minf=32 00:30:13.627 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:13.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:13.627 issued rwts: total=17617,17638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.627 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:13.627 00:30:13.627 Run status group 0 (all jobs): 00:30:13.627 READ: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=68.8MiB (72.2MB), run=2007-2007msec 00:30:13.627 WRITE: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.2MB), run=2007-2007msec 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.627 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.628 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:13.628 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:13.628 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:13.628 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:13.628 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:13.628 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:13.628 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:13.628 01:16:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:13.628 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:13.628 fio-3.35 00:30:13.628 Starting 1 thread 00:30:13.628 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.156 00:30:16.156 test: (groupid=0, jobs=1): err= 0: pid=1253257: Sun Jul 14 01:16:05 2024 00:30:16.156 read: IOPS=7940, BW=124MiB/s (130MB/s)(249MiB/2007msec) 00:30:16.156 slat (usec): min=2, max=116, avg= 3.95, stdev= 2.15 00:30:16.156 clat (usec): min=3167, max=17808, avg=9730.83, stdev=2455.90 00:30:16.156 lat (usec): min=3171, max=17811, avg=9734.78, stdev=2455.97 00:30:16.156 clat percentiles (usec): 00:30:16.156 | 1.00th=[ 4948], 5.00th=[ 5866], 10.00th=[ 6587], 20.00th=[ 7504], 00:30:16.156 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10290], 00:30:16.156 | 70.00th=[11076], 80.00th=[11863], 90.00th=[12911], 95.00th=[13829], 00:30:16.156 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17433], 99.95th=[17433], 00:30:16.156 | 99.99th=[17695] 00:30:16.156 bw ( KiB/s): min=59712, max=69280, per=51.42%, avg=65320.00, stdev=4021.10, samples=4 00:30:16.156 iops : min= 3732, max= 4330, avg=4082.50, stdev=251.32, samples=4 00:30:16.156 write: IOPS=4531, BW=70.8MiB/s (74.2MB/s)(133MiB/1876msec); 0 zone resets 00:30:16.156 slat (usec): min=30, max=194, avg=35.30, stdev= 6.98 00:30:16.156 clat (usec): min=4341, max=18492, avg=11159.18, stdev=1883.15 00:30:16.156 lat (usec): min=4372, max=18527, avg=11194.48, stdev=1883.38 00:30:16.156 clat percentiles (usec): 00:30:16.156 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:16.156 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:30:16.156 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13829], 95.00th=[14615], 00:30:16.156 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:30:16.156 | 99.99th=[18482] 00:30:16.156 bw ( KiB/s): min=62336, max=72000, per=93.45%, avg=67752.00, stdev=4146.80, samples=4 00:30:16.156 iops : min= 3896, max= 4500, avg=4234.50, stdev=259.17, samples=4 00:30:16.156 lat (msec) : 4=0.09%, 10=47.01%, 20=52.90% 00:30:16.156 cpu : usr=73.33%, sys=23.03%, ctx=28, majf=0, minf=48 
00:30:16.156 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:16.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.156 issued rwts: total=15936,8501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.156 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.156 00:30:16.156 Run status group 0 (all jobs): 00:30:16.156 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2007-2007msec 00:30:16.156 WRITE: bw=70.8MiB/s (74.2MB/s), 70.8MiB/s-70.8MiB/s (74.2MB/s-74.2MB/s), io=133MiB (139MB), run=1876-1876msec 00:30:16.156 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:16.414 01:16:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:19.705 Nvme0n1 00:30:19.705 01:16:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=13465677-19d3-430d-b23f-790b2dfa1018 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 13465677-19d3-430d-b23f-790b2dfa1018 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=13465677-19d3-430d-b23f-790b2dfa1018 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:22.992 { 00:30:22.992 "uuid": "13465677-19d3-430d-b23f-790b2dfa1018", 00:30:22.992 "name": "lvs_0", 00:30:22.992 "base_bdev": "Nvme0n1", 00:30:22.992 "total_data_clusters": 930, 00:30:22.992 "free_clusters": 930, 00:30:22.992 "block_size": 512, 
00:30:22.992 "cluster_size": 1073741824 00:30:22.992 } 00:30:22.992 ]' 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="13465677-19d3-430d-b23f-790b2dfa1018") .free_clusters' 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="13465677-19d3-430d-b23f-790b2dfa1018") .cluster_size' 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:22.992 952320 00:30:22.992 01:16:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:22.992 93339e49-7c2f-4a29-8994-62be28032027 00:30:22.992 01:16:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:23.250 01:16:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:23.507 01:16:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # 
[[ -n '' ]] 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:23.764 01:16:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:24.023 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:24.023 fio-3.35 00:30:24.023 Starting 1 thread 00:30:24.023 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.546 00:30:26.546 test: (groupid=0, jobs=1): err= 0: pid=1254537: Sun Jul 14 01:16:15 2024 00:30:26.546 read: IOPS=6128, BW=23.9MiB/s (25.1MB/s)(48.0MiB/2007msec) 00:30:26.546 slat (nsec): min=1886, max=136326, avg=2657.58, stdev=2236.10 00:30:26.546 clat (usec): min=921, max=171530, avg=11551.74, stdev=11537.18 00:30:26.546 lat (usec): min=924, max=171567, avg=11554.40, stdev=11537.50 00:30:26.546 clat percentiles (msec): 00:30:26.546 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:26.546 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:26.546 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:26.546 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:26.546 | 99.99th=[ 171] 00:30:26.546 bw ( KiB/s): min=17224, max=27088, per=99.77%, avg=24458.00, stdev=4826.28, samples=4 00:30:26.546 iops : min= 4306, max= 6772, avg=6114.50, stdev=1206.57, samples=4 00:30:26.546 write: IOPS=6109, BW=23.9MiB/s (25.0MB/s)(47.9MiB/2007msec); 0 zone resets 00:30:26.546 slat (usec): min=2, max=109, avg= 2.80, stdev= 1.82 00:30:26.547 clat (usec): min=323, max=169708, avg=9237.89, stdev=10855.79 00:30:26.547 lat (usec): min=326, max=169714, avg=9240.69, stdev=10856.07 00:30:26.547 clat percentiles (msec): 00:30:26.547 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:26.547 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:26.547 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:26.547 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:30:26.547 | 99.99th=[ 169] 00:30:26.547 bw ( KiB/s): min=18280, max=26616, per=99.89%, avg=24410.00, stdev=4088.58, samples=4 00:30:26.547 iops : min= 4570, max= 6654, avg=6102.50, stdev=1022.14, samples=4 00:30:26.547 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:26.547 lat (msec) : 2=0.02%, 4=0.13%, 10=58.74%, 20=40.56%, 250=0.52% 00:30:26.547 cpu : usr=52.49%, sys=42.42%, ctx=96, majf=0, minf=32 00:30:26.547 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:26.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:26.547 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:26.547 issued rwts: total=12300,12261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:26.547 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:26.547 00:30:26.547 Run status group 0 (all jobs): 00:30:26.547 READ: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.0MiB (50.4MB), run=2007-2007msec 00:30:26.547 WRITE: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=47.9MiB (50.2MB), run=2007-2007msec 00:30:26.547 01:16:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:26.805 01:16:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=1e86dfbe-77c4-4b0b-a407-bc53ce813883 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 1e86dfbe-77c4-4b0b-a407-bc53ce813883 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=1e86dfbe-77c4-4b0b-a407-bc53ce813883 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:28.190 { 00:30:28.190 "uuid": "13465677-19d3-430d-b23f-790b2dfa1018", 00:30:28.190 "name": "lvs_0", 00:30:28.190 "base_bdev": "Nvme0n1", 00:30:28.190 "total_data_clusters": 930, 00:30:28.190 "free_clusters": 0, 00:30:28.190 "block_size": 512, 00:30:28.190 "cluster_size": 1073741824 00:30:28.190 }, 00:30:28.190 { 00:30:28.190 "uuid": "1e86dfbe-77c4-4b0b-a407-bc53ce813883", 00:30:28.190 "name": "lvs_n_0", 00:30:28.190 "base_bdev": "93339e49-7c2f-4a29-8994-62be28032027", 00:30:28.190 "total_data_clusters": 237847, 00:30:28.190 "free_clusters": 237847, 00:30:28.190 "block_size": 512, 00:30:28.190 "cluster_size": 4194304 00:30:28.190 } 00:30:28.190 ]' 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1e86dfbe-77c4-4b0b-a407-bc53ce813883") .free_clusters' 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1e86dfbe-77c4-4b0b-a407-bc53ce813883") .cluster_size' 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:28.190 951388 00:30:28.190 01:16:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:28.790 615e921d-4d8f-4733-b0af-b29f1e66d9ab 00:30:28.790 01:16:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:29.048 01:16:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:29.306 01:16:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:29.564 01:16:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:29.824 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:29.824 fio-3.35 00:30:29.824 Starting 1 thread 00:30:29.824 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.354 00:30:32.354 test: (groupid=0, jobs=1): err= 0: pid=1255271: Sun Jul 14 01:16:21 2024 00:30:32.354 read: IOPS=5791, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2007msec) 00:30:32.354 slat (usec): min=2, max=170, avg= 2.71, stdev= 2.47 00:30:32.354 clat (usec): min=4793, max=20931, avg=12251.74, stdev=1019.09 00:30:32.354 lat (usec): min=4800, max=20934, avg=12254.45, stdev=1018.95 00:30:32.354 clat percentiles (usec): 00:30:32.354 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:30:32.354 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:30:32.354 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:30:32.354 | 99.00th=[14615], 99.50th=[14746], 99.90th=[18744], 99.95th=[20055], 00:30:32.354 | 99.99th=[20841] 00:30:32.354 bw ( KiB/s): min=21896, max=23656, per=99.66%, avg=23088.00, stdev=807.59, samples=4 00:30:32.354 iops : min= 5474, max= 5914, avg=5772.00, stdev=201.90, samples=4 00:30:32.354 write: IOPS=5772, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2007msec); 0 zone resets 00:30:32.354 slat (usec): min=2, max=134, avg= 2.87, stdev= 1.96 00:30:32.354 clat (usec): min=2406, max=17270, avg=9735.78, stdev=880.16 00:30:32.354 lat (usec): min=2415, max=17273, avg=9738.66, stdev=880.07 00:30:32.354 clat percentiles (usec): 00:30:32.354 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:32.354 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:30:32.354 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:32.354 | 99.00th=[11731], 99.50th=[11994], 99.90th=[14091], 99.95th=[15664], 00:30:32.354 | 99.99th=[17171] 00:30:32.354 bw ( KiB/s): min=22936, max=23232, per=99.94%, avg=23078.00, stdev=144.20, samples=4 00:30:32.354 iops : min= 5734, max= 5808, avg=5769.50, stdev=36.05, samples=4 00:30:32.354 lat (msec) : 4=0.05%, 10=31.99%, 20=67.94%, 50=0.02% 00:30:32.354 cpu : usr=53.94%, sys=41.28%, ctx=87, majf=0, minf=32 00:30:32.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:32.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:32.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:32.354 issued rwts: total=11624,11586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:32.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:32.354 00:30:32.354 Run status group 0 (all jobs): 00:30:32.354 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2007-2007msec 00:30:32.354 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2007-2007msec 00:30:32.354 01:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:32.354 01:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:32.354 01:16:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:36.541 01:16:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 
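The lvol sizes used above follow directly from the lvstore geometry reported by bdev_lvol_get_lvstores: get_lvs_free_mb appears to convert free_clusters x cluster_size into MiB, which matches both runs in this log. A minimal sketch of that arithmetic, reusing the fc/cs values captured above (illustrative only, not the helper's actual implementation):

  # lvs_0: 930 free clusters of 1 GiB -> 952320 MiB, the size passed to bdev_lvol_create
  fc=930; cs=1073741824
  echo $(( fc * (cs / 1048576) ))   # 952320
  # lvs_n_0: 237847 free clusters of 4 MiB -> 951388 MiB for lbd_nest_0
  fc=237847; cs=4194304
  echo $(( fc * (cs / 1048576) ))   # 951388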
00:30:36.541 01:16:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:39.830 01:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:39.830 01:16:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:41.731 01:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:41.731 01:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:41.731 01:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:41.731 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:41.731 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:41.731 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.731 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:41.731 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.732 rmmod nvme_tcp 00:30:41.732 rmmod nvme_fabrics 00:30:41.732 rmmod nvme_keyring 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1252548 ']' 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1252548 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1252548 ']' 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1252548 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1252548 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1252548' 00:30:41.732 killing process with pid 1252548 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1252548 00:30:41.732 01:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1252548 00:30:41.991 01:16:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.991 01:16:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.991 01:16:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.991 01:16:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.991 01:16:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.991 01:16:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.991 01:16:31 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:41.991 01:16:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.898 01:16:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:43.899 00:30:43.899 real 0m36.973s 00:30:43.899 user 2m20.886s 00:30:43.899 sys 0m7.478s 00:30:43.899 01:16:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.899 01:16:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.899 ************************************ 00:30:43.899 END TEST nvmf_fio_host 00:30:43.899 ************************************ 00:30:43.899 01:16:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:43.899 01:16:33 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:43.899 01:16:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:43.899 01:16:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.899 01:16:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:43.899 ************************************ 00:30:43.899 START TEST nvmf_failover 00:30:43.899 ************************************ 00:30:43.899 01:16:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:44.157 * Looking for test storage... 00:30:44.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.157 01:16:33 nvmf_tcp.nvmf_failover 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:44.158 01:16:33 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.064 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:46.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:46.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:46.065 
01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:46.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:46.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:46.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:30:46.065 00:30:46.065 --- 10.0.0.2 ping statistics --- 00:30:46.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.065 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:30:46.065 00:30:46.065 --- 10.0.0.1 ping statistics --- 00:30:46.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.065 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:46.065 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1258632 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1258632 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1258632 ']' 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.322 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:46.322 [2024-07-14 01:16:35.535654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:46.322 [2024-07-14 01:16:35.535727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.322 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.322 [2024-07-14 01:16:35.603011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:46.322 [2024-07-14 01:16:35.694157] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.322 [2024-07-14 01:16:35.694218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.322 [2024-07-14 01:16:35.694244] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.322 [2024-07-14 01:16:35.694258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.322 [2024-07-14 01:16:35.694269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.322 [2024-07-14 01:16:35.694354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.322 [2024-07-14 01:16:35.697886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:46.322 [2024-07-14 01:16:35.697899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.581 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:46.581 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:46.581 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:46.581 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.581 01:16:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:46.581 01:16:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.581 01:16:35 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:46.839 [2024-07-14 01:16:36.060458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.839 01:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:47.097 Malloc0 00:30:47.097 01:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:47.356 01:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:47.642 01:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.899 [2024-07-14 01:16:37.080479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.899 01:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:48.157 [2024-07-14 01:16:37.325286] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:48.157 01:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:48.415 [2024-07-14 01:16:37.582283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1258922 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1258922 /var/tmp/bdevperf.sock 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1258922 ']' 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:48.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
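At this point host/failover.sh has assembled the failover topology: a TCP transport, a Malloc0 bdev (bdev_malloc_create 64 512) exported through nqn.2016-06.io.spdk:cnode1, three listeners on 10.0.0.2 (ports 4420/4421/4422), and a bdevperf instance started with -z so it waits for RPCs on /var/tmp/bdevperf.sock. A condensed sketch of the equivalent RPC sequence (illustrative: rpc.py and bdevperf stand in for the absolute script/binary paths used in the run, and the target itself runs inside the cvl_0_0_ns_spdk namespace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # bdevperf waits on its own RPC socket; the log lines that follow attach NVMe0 over
  # 4420 and 4421, start perform_tests, then remove listeners to drive the failover path.
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f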
00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:48.415 01:16:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:48.674 01:16:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:48.674 01:16:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:48.674 01:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.932 NVMe0n1 00:30:48.932 01:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.500 00:30:49.500 01:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1259043 00:30:49.500 01:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:49.500 01:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:50.437 01:16:39 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.696 [2024-07-14 01:16:39.960981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.696 [2024-07-14 01:16:39.961243] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 [2024-07-14 01:16:39.962599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x996270 is same with the state(5) to be set 00:30:50.697 01:16:39 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:53.991 01:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.991 00:30:54.250 01:16:43 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:54.250 [2024-07-14 01:16:43.643937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644046] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644059] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644243] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 
00:30:54.250 [2024-07-14 01:16:43.644329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.250 [2024-07-14 01:16:43.644399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644455] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644479] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is 
same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 [2024-07-14 01:16:43.644904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x997060 is same with the state(5) to be set 00:30:54.251 01:16:43 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:57.533 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.533 [2024-07-14 01:16:46.944042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.793 01:16:46 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:58.727 01:16:47 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:58.988 [2024-07-14 01:16:48.201485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with 
the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201823] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201884] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.201997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202304] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202372] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 [2024-07-14 01:16:48.202383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x998680 is same with the state(5) to be set 00:30:58.988 01:16:48 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1259043 00:31:05.558 0 00:31:05.558 01:16:53 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1258922 00:31:05.558 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1258922 ']' 00:31:05.558 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1258922 00:31:05.558 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:05.558 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:05.559 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1258922 00:31:05.559 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:05.559 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:05.559 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1258922' 00:31:05.559 killing process with pid 1258922 00:31:05.559 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1258922 00:31:05.559 01:16:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1258922 00:31:05.559 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:05.559 [2024-07-14 01:16:37.645538] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:05.559 [2024-07-14 01:16:37.645637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258922 ] 00:31:05.559 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.559 [2024-07-14 01:16:37.705319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.559 [2024-07-14 01:16:37.792690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:05.559 Running I/O for 15 seconds... 
00:31:05.559 [2024-07-14 01:16:39.963691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:05.559 [2024-07-14 01:16:39.963731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.559-00:31:05.562 [2024-07-14 01:16:39.963760 .. 01:16:39.966758] nvme_qpair.c: the same command/completion pair repeats for each queued I/O: READ sqid:1 nsid:1 len:8 for lba:75904 through lba:76400 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), then WRITE sqid:1 nsid:1 len:8 for lba:76408 through lba:76696 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each command completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeats condensed)
00:31:05.562 [2024-07-14 01:16:39.966773] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.966786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.966801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.966813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.966828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.966842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.966857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.966892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.966910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.966924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.966940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.966954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.966969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.966983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.966999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.967013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.967042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.967070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.562 [2024-07-14 01:16:39.967106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76792 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76800 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76808 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76816 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76824 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76832 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76840 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76848 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76856 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76864 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.562 [2024-07-14 01:16:39.967676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76872 len:8 PRP1 0x0 PRP2 0x0 00:31:05.562 [2024-07-14 01:16:39.967688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.562 [2024-07-14 01:16:39.967701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.562 [2024-07-14 01:16:39.967711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.563 [2024-07-14 01:16:39.967722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76880 len:8 PRP1 0x0 PRP2 0x0 00:31:05.563 [2024-07-14 01:16:39.967734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.967747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.563 [2024-07-14 01:16:39.967757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.563 [2024-07-14 01:16:39.967768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76888 len:8 PRP1 0x0 PRP2 0x0 00:31:05.563 [2024-07-14 01:16:39.967781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.967794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.563 [2024-07-14 01:16:39.967804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.563 [2024-07-14 01:16:39.967815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76896 len:8 PRP1 0x0 PRP2 0x0 00:31:05.563 [2024-07-14 01:16:39.967827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.967840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.563 [2024-07-14 01:16:39.967851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.563 [2024-07-14 01:16:39.967862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76904 len:8 PRP1 0x0 PRP2 0x0 00:31:05.563 [2024-07-14 01:16:39.967896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.967911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.563 [2024-07-14 01:16:39.967922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.563 [2024-07-14 01:16:39.967941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76912 len:8 PRP1 0x0 PRP2 0x0 00:31:05.563 [2024-07-14 01:16:39.967955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.968014] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9fd760 was disconnected and freed. reset controller. 
00:31:05.563 [2024-07-14 01:16:39.968033] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:05.563 [2024-07-14 01:16:39.968066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.563 [2024-07-14 01:16:39.968084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.968100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.563 [2024-07-14 01:16:39.968113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.968126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.563 [2024-07-14 01:16:39.968145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.968160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.563 [2024-07-14 01:16:39.968173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:39.968186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:05.563 [2024-07-14 01:16:39.971469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:05.563 [2024-07-14 01:16:39.971507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9830 (9): Bad file descriptor 00:31:05.563 [2024-07-14 01:16:40.138537] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
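The notices above trace one full failover cycle in this run: queued WRITE commands complete with ABORTED - SQ DELETION (00/08) as the submission queue is torn down, qpair 0x9fd760 is disconnected and freed, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, the admin queue's ASYNC EVENT REQUESTs are aborted, the controller enters the failed state and is disconnected, and the reset then completes successfully. The sketch below is not part of the test harness; it is only a minimal, illustrative way to tally console output of this shape offline, and it assumes nothing beyond the record formats visible in these lines (nvme_io_qpair_print_command, spdk_nvme_print_completion, bdev_nvme_failover_trid, and the "Resetting controller successful" notice). The script and helper names are hypothetical.

#!/usr/bin/env python3
"""Tally aborted NVMe commands and failover events from SPDK console output.

Illustrative helper only: the regular expressions below are derived from the
nvme_qpair.c / bdev_nvme.c notices printed in this log and may not cover every
message format SPDK can emit.
"""
import re
import sys
from collections import Counter

# e.g. "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76464 len:8 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
# e.g. "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."
ABORT_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
# e.g. "bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421"
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)"
)
RESET_OK_RE = re.compile(r"Resetting controller successful")


def summarize(stream):
    """Count printed commands per opcode, abort completions, failovers, and resets.

    Counting is done per regex match rather than per line, so it still works
    when several records are wrapped onto a single physical line, as in this log.
    """
    commands = Counter()
    aborts = 0
    failovers = []
    resets_ok = 0
    for line in stream:
        for match in CMD_RE.finditer(line):
            commands[match.group(1)] += 1  # READ or WRITE
        aborts += len(ABORT_RE.findall(line))
        for match in FAILOVER_RE.finditer(line):
            failovers.append((match.group(1), match.group(2)))
        resets_ok += len(RESET_OK_RE.findall(line))
    return commands, aborts, failovers, resets_ok


if __name__ == "__main__":
    commands, aborts, failovers, resets_ok = summarize(sys.stdin)
    for opcode, count in sorted(commands.items()):
        print(f"{opcode}: {count} commands printed")
    print(f"completions aborted by SQ deletion: {aborts}")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")
    print(f"successful controller resets: {resets_ok}")

Feed it a saved copy of this console output on stdin, e.g. python3 tally_aborts.py < console.txt (the file name is illustrative).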
00:31:05.563 [2024-07-14 01:16:43.646154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 
01:16:43.646520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.563 [2024-07-14 01:16:43.646676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.563 [2024-07-14 01:16:43.646690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.564 [2024-07-14 01:16:43.646704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.564 [2024-07-14 01:16:43.646719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.564 [2024-07-14 01:16:43.646732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.564 [2024-07-14 01:16:43.646751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.564 [2024-07-14 01:16:43.646765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.564 [2024-07-14 01:16:43.646780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.564 [2024-07-14 01:16:43.646794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.564 [2024-07-14 01:16:43.646809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.564 [2024-07-14 01:16:43.646823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.564 [2024-07-14 01:16:43.646838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.564 [2024-07-14 01:16:43.646852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.646891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.646910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.646926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.646940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.646956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.646969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.646986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.565 [2024-07-14 01:16:43.647001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.647031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.647060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.647088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.647117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.647150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.647195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.647224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.565 [2024-07-14 01:16:43.647253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.565 [2024-07-14 01:16:43.647281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.565 [2024-07-14 01:16:43.647309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.565 [2024-07-14 01:16:43.647337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.565 [2024-07-14 01:16:43.647366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.565 [2024-07-14 01:16:43.647394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.565 [2024-07-14 01:16:43.647423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.565 [2024-07-14 01:16:43.647438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.565 [2024-07-14 01:16:43.647451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.647986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.647999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 
01:16:43.648360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.566 [2024-07-14 01:16:43.648474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.566 [2024-07-14 01:16:43.648489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.567 [2024-07-14 01:16:43.648706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.567 [2024-07-14 01:16:43.648733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.567 [2024-07-14 01:16:43.648761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.567 [2024-07-14 01:16:43.648789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.567 [2024-07-14 01:16:43.648817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.567 [2024-07-14 01:16:43.648884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.567 [2024-07-14 01:16:43.648915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.648972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.648988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:31:05.567 [2024-07-14 01:16:43.649585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.567 [2024-07-14 01:16:43.649598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.567 [2024-07-14 01:16:43.649613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.568 [2024-07-14 01:16:43.649626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.568 [2024-07-14 01:16:43.649654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.568 [2024-07-14 01:16:43.649682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.568 [2024-07-14 01:16:43.649710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.568 [2024-07-14 01:16:43.649737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.568 [2024-07-14 01:16:43.649765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.649814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114072 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.649829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.649895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.649907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114080 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.649920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649934] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.649945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.649956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114088 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.649969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.649983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.649993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.650005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114096 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.650017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.650042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.650053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114104 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.650065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.650090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.650101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114112 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.650113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.650138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.650149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114120 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.650162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.650213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.650225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114128 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.650237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.650264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.650276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114136 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.650288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.568 [2024-07-14 01:16:43.650312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.568 [2024-07-14 01:16:43.650323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114144 len:8 PRP1 0x0 PRP2 0x0 00:31:05.568 [2024-07-14 01:16:43.650335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650393] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9f9300 was disconnected and freed. reset controller. 00:31:05.568 [2024-07-14 01:16:43.650411] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:05.568 [2024-07-14 01:16:43.650460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.568 [2024-07-14 01:16:43.650479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.568 [2024-07-14 01:16:43.650508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.568 [2024-07-14 01:16:43.650536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.568 [2024-07-14 01:16:43.650563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:43.650577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:05.568 [2024-07-14 01:16:43.650631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9830 (9): Bad file descriptor 00:31:05.568 [2024-07-14 01:16:43.653913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:05.568 [2024-07-14 01:16:43.773054] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:05.568 [2024-07-14 01:16:48.203280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.568 [2024-07-14 01:16:48.203321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:48.203351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.568 [2024-07-14 01:16:48.203367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.568 [2024-07-14 01:16:48.203383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.569 [2024-07-14 01:16:48.203397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.569 [2024-07-14 01:16:48.203441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.569 [2024-07-14 01:16:48.203472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.569 [2024-07-14 01:16:48.203499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.569 [2024-07-14 01:16:48.203527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.569 [2024-07-14 01:16:48.203554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.569 [2024-07-14 01:16:48.203582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.569 [2024-07-14 01:16:48.203609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203624] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.203973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.203987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.569 [2024-07-14 01:16:48.204348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.569 [2024-07-14 01:16:48.204362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.204375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.204402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.204430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.204457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.204485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71472 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.204512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.204544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 
[2024-07-14 01:16:48.204806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.204977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.204993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.205006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.205021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.570 [2024-07-14 01:16:48.205034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.205049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.205063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.205078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.205091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.205121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.205134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.205149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.205162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.205191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.205204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.205218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.205232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.570 [2024-07-14 01:16:48.205246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.570 [2024-07-14 01:16:48.205259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.571 [2024-07-14 01:16:48.205286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.205980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.205996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.206010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.206039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 
01:16:48.206055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.206072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.206102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.571 [2024-07-14 01:16:48.206132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.571 [2024-07-14 01:16:48.206161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.571 [2024-07-14 01:16:48.206205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.571 [2024-07-14 01:16:48.206233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.571 [2024-07-14 01:16:48.206263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.571 [2024-07-14 01:16:48.206293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.571 [2024-07-14 01:16:48.206308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.571 [2024-07-14 01:16:48.206322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:05.572 [2024-07-14 01:16:48.206351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:05.572 [2024-07-14 01:16:48.206846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.206917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72168 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.206931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.206949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.206961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.206973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72176 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.206986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.207011] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.207022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72184 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.207035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.207060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.207071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72192 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.207084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.207108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.207119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72200 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.207132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.207157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.207168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72208 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.207197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.207221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.207232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72216 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.207245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.207273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.207284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72224 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.207297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.207321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.207332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72232 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.207344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.572 [2024-07-14 01:16:48.207368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.572 [2024-07-14 01:16:48.207379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72240 len:8 PRP1 0x0 PRP2 0x0 00:31:05.572 [2024-07-14 01:16:48.207391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.572 [2024-07-14 01:16:48.207449] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9f9300 was disconnected and freed. reset controller. 00:31:05.572 [2024-07-14 01:16:48.207467] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:31:05.573 [2024-07-14 01:16:48.207516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.573 [2024-07-14 01:16:48.207535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.573 [2024-07-14 01:16:48.207557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.573 [2024-07-14 01:16:48.207572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.573 [2024-07-14 01:16:48.207587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.573 [2024-07-14 01:16:48.207600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.573 [2024-07-14 01:16:48.207614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.573 [2024-07-14 01:16:48.207627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.573 [2024-07-14 01:16:48.207641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:05.573 [2024-07-14 01:16:48.207698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c9830 (9): Bad file descriptor 00:31:05.573 [2024-07-14 01:16:48.210974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:05.573 [2024-07-14 01:16:48.409585] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
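The long run of ABORTED - SQ DELETION completions above is the expected pattern while bdev_nvme drains the old queue pair during failover: queued WRITEs are completed manually, the disconnected qpair is freed, and the controller is reset against the next configured path (here 10.0.0.2:4422 back to 10.0.0.2:4420). For reference, a minimal sketch of the multi-path attach/detach pattern this test drives, using only RPCs that appear in this log (the RPC/SOCK/NQN shell variables are illustrative shorthand; a bdevperf instance is assumed to be listening on /var/tmp/bdevperf.sock):
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
# Attach the same controller name to several listeners of the same subsystem;
# bdev_nvme keeps the extra trids as failover paths for NVMe0.
$RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
$RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
$RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
# Detaching the active path triggers the "Start failover ... resetting controller" sequence seen above.
$RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN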
00:31:05.573 00:31:05.573 Latency(us) 00:31:05.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.573 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:05.573 Verification LBA range: start 0x0 length 0x4000 00:31:05.573 NVMe0n1 : 15.01 8504.95 33.22 1265.22 0.00 13074.03 831.34 18932.62 00:31:05.573 =================================================================================================================== 00:31:05.573 Total : 8504.95 33.22 1265.22 0.00 13074.03 831.34 18932.62 00:31:05.573 Received shutdown signal, test time was about 15.000000 seconds 00:31:05.573 00:31:05.573 Latency(us) 00:31:05.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:05.573 =================================================================================================================== 00:31:05.573 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1260772 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1260772 /var/tmp/bdevperf.sock 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1260772 ']' 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:05.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:05.573 [2024-07-14 01:16:54.626077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:05.573 [2024-07-14 01:16:54.870784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:05.573 01:16:54 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:05.831 NVMe0n1 00:31:05.831 01:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:06.396 00:31:06.396 01:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:06.654 00:31:06.654 01:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:06.654 01:16:55 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:06.911 01:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:07.200 01:16:56 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:10.494 01:16:59 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:10.494 01:16:59 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:10.494 01:16:59 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1261438 00:31:10.494 01:16:59 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:10.494 01:16:59 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1261438 00:31:11.430 0 00:31:11.430 01:17:00 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:11.430 [2024-07-14 01:16:54.146915] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:31:11.430 [2024-07-14 01:16:54.147020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260772 ] 00:31:11.430 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.430 [2024-07-14 01:16:54.207678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.430 [2024-07-14 01:16:54.290580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.430 [2024-07-14 01:16:56.402692] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:11.430 [2024-07-14 01:16:56.402785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.430 [2024-07-14 01:16:56.402809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.430 [2024-07-14 01:16:56.402825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.430 [2024-07-14 01:16:56.402839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.430 [2024-07-14 01:16:56.402853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.430 [2024-07-14 01:16:56.402873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.430 [2024-07-14 01:16:56.402889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:11.430 [2024-07-14 01:16:56.402904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:11.430 [2024-07-14 01:16:56.402917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:11.430 [2024-07-14 01:16:56.402967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x792830 (9): Bad file descriptor 00:31:11.430 [2024-07-14 01:16:56.403003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:11.430 [2024-07-14 01:16:56.447679] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:11.430 Running I/O for 1 seconds... 
00:31:11.430 00:31:11.430 Latency(us) 00:31:11.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.430 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:11.430 Verification LBA range: start 0x0 length 0x4000 00:31:11.430 NVMe0n1 : 1.01 8738.34 34.13 0.00 0.00 14588.06 2791.35 15728.64 00:31:11.430 =================================================================================================================== 00:31:11.430 Total : 8738.34 34.13 0.00 0.00 14588.06 2791.35 15728.64 00:31:11.431 01:17:00 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:11.431 01:17:00 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:11.689 01:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:11.949 01:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:11.949 01:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:12.207 01:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:12.465 01:17:01 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:15.754 01:17:04 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:15.754 01:17:04 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:15.754 01:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1260772 00:31:15.754 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1260772 ']' 00:31:15.754 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1260772 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1260772 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1260772' 00:31:16.012 killing process with pid 1260772 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1260772 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1260772 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:16.012 01:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:16.270 01:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:16.270 
01:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:16.271 01:17:05 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:16.271 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:16.271 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:16.271 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:16.271 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:16.271 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:16.271 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:16.271 rmmod nvme_tcp 00:31:16.271 rmmod nvme_fabrics 00:31:16.271 rmmod nvme_keyring 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1258632 ']' 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1258632 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1258632 ']' 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1258632 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1258632 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1258632' 00:31:16.529 killing process with pid 1258632 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1258632 00:31:16.529 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1258632 00:31:16.787 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:16.787 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:16.787 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:16.787 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:16.787 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:16.787 01:17:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.787 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:16.787 01:17:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.696 01:17:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:18.696 00:31:18.696 real 0m34.701s 00:31:18.696 user 2m1.938s 00:31:18.696 sys 0m6.049s 00:31:18.696 01:17:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:18.696 01:17:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
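Before the END-TEST banner below, the failover test decides pass/fail by counting reset events in the captured bdevperf output rather than by an exit code, then tears the fabric down. Roughly, and only as a sketch (paths, NQN and module names are taken from the trace above; the exact error handling in failover.sh is simplified here):
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
TRY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$TRY")
(( count == 3 )) || exit 1        # this run logged exactly three successful controller resets
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f "$TRY"
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1          # part of nvmftestfini, as shown above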
00:31:18.696 ************************************ 00:31:18.696 END TEST nvmf_failover 00:31:18.696 ************************************ 00:31:18.696 01:17:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:18.696 01:17:08 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:18.696 01:17:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:18.696 01:17:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.696 01:17:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:18.696 ************************************ 00:31:18.696 START TEST nvmf_host_discovery 00:31:18.696 ************************************ 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:18.696 * Looking for test storage... 00:31:18.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.696 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:18.955 01:17:08 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:18.955 01:17:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.862 01:17:10 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:20.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:20.862 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:20.862 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:20.863 01:17:10 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:20.863 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:20.863 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.863 01:17:10 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:20.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:31:20.863 00:31:20.863 --- 10.0.0.2 ping statistics --- 00:31:20.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.863 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:31:20.863 00:31:20.863 --- 10.0.0.1 ping statistics --- 00:31:20.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.863 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1264039 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1264039 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1264039 ']' 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:20.863 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.863 [2024-07-14 01:17:10.263942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:20.863 [2024-07-14 01:17:10.264017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.121 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.121 [2024-07-14 01:17:10.336462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.121 [2024-07-14 01:17:10.431113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:21.121 [2024-07-14 01:17:10.431186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.121 [2024-07-14 01:17:10.431211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:21.121 [2024-07-14 01:17:10.431225] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:21.121 [2024-07-14 01:17:10.431236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
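The discovery test starting here runs the target inside a network namespace so that the host and target ends of the TCP connection use separate interfaces on one machine. Condensed from the nvmf_tcp_init trace above (interface names cvl_0_0/cvl_0_1, addresses, and the nvmf_tgt arguments are exactly those of this run; treat this as an illustrative sketch, not the full common.sh logic):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                        # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator sanity check
# The target application is then launched inside the namespace, as traced above:
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2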
00:31:21.121 [2024-07-14 01:17:10.431272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.379 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.379 [2024-07-14 01:17:10.565170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.380 [2024-07-14 01:17:10.573341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.380 null0 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.380 null1 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1264178 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 1264178 /tmp/host.sock 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1264178 ']' 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:21.380 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:21.380 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.380 [2024-07-14 01:17:10.644431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:21.380 [2024-07-14 01:17:10.644496] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264178 ] 00:31:21.380 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.380 [2024-07-14 01:17:10.706172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.638 [2024-07-14 01:17:10.797253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.638 01:17:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.638 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:21.638 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:21.638 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.638 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.638 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.638 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:21.639 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.639 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.639 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.639 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.639 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:21.639 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.639 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
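At this point the discovery test has two SPDK applications running: the namespaced nvmf_tgt acting as the target (default RPC socket) and a second instance on /tmp/host.sock acting as the host, which drives bdev_nvme_start_discovery against the discovery service on port 8009. Condensed from the rpc_cmd trace above (rpc_cmd is the test-harness wrapper around scripts/rpc.py; names and ports are those of this run, shown only as a sketch):
# Target side: discovery listener plus a null-bdev subsystem to be exposed later.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc_cmd bdev_null_create null0 1000 512
rpc_cmd bdev_null_create null1 1000 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
# Host side: start discovery and confirm nothing has attached yet.
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers    # still empty: cnode0 has no data listener or allowed host yet
rpc_cmd -s /tmp/host.sock bdev_get_bdevs               # likewise empty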
00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.896 [2024-07-14 01:17:11.211061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:21.896 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:21.897 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:21.897 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.897 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:22.154 01:17:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:22.724 [2024-07-14 01:17:11.943257] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:22.724 [2024-07-14 01:17:11.943301] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:22.724 [2024-07-14 01:17:11.943329] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:22.724 [2024-07-14 01:17:12.029579] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:22.982 [2024-07-14 01:17:12.254817] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:22.982 [2024-07-14 01:17:12.254863] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.982 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.241 01:17:12 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:23.241 01:17:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.654 [2024-07-14 01:17:13.698497] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:24.654 [2024-07-14 01:17:13.699414] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:24.654 [2024-07-14 01:17:13.699457] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:24.654 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.655 [2024-07-14 01:17:13.785208] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:24.655 01:17:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:24.655 [2024-07-14 01:17:13.849891] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:24.655 [2024-07-14 01:17:13.849933] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:24.655 [2024-07-14 01:17:13.849943] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.589 [2024-07-14 01:17:14.926312] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:25.589 [2024-07-14 01:17:14.926345] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:25.589 [2024-07-14 01:17:14.928652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.589 [2024-07-14 01:17:14.928688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.589 [2024-07-14 01:17:14.928707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.589 [2024-07-14 01:17:14.928723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.589 [2024-07-14 01:17:14.928739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.589 [2024-07-14 01:17:14.928754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.589 [2024-07-14 01:17:14.928769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:25.589 [2024-07-14 01:17:14.928784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:25.589 [2024-07-14 01:17:14.928800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0530 is same with the state(5) to be set 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.589 01:17:14 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.589 [2024-07-14 01:17:14.938653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0530 (9): Bad file descriptor 00:31:25.589 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.589 [2024-07-14 01:17:14.948701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.589 [2024-07-14 01:17:14.948971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.589 [2024-07-14 01:17:14.949001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb0530 with addr=10.0.0.2, port=4420 00:31:25.589 [2024-07-14 01:17:14.949019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0530 is same with the state(5) to be set 00:31:25.590 [2024-07-14 01:17:14.949042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0530 (9): Bad file descriptor 00:31:25.590 [2024-07-14 01:17:14.949065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.590 [2024-07-14 01:17:14.949080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.590 [2024-07-14 01:17:14.949096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.590 [2024-07-14 01:17:14.949123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
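The common/autotest_common.sh@912-@918 xtrace lines that recur throughout this test come from the suite's polling helper: it re-evaluates a condition string up to ten times, sleeping one second between attempts. A minimal bash sketch of that pattern, reconstructed only from the trace shown here (the real helper in autotest_common.sh may differ in details):

waitforcondition() {
    local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local max=10
    while ((max--)); do
        # succeed as soon as the condition evaluates true
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}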
00:31:25.590 [2024-07-14 01:17:14.958790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.590 [2024-07-14 01:17:14.959044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.590 [2024-07-14 01:17:14.959072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb0530 with addr=10.0.0.2, port=4420 00:31:25.590 [2024-07-14 01:17:14.959089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0530 is same with the state(5) to be set 00:31:25.590 [2024-07-14 01:17:14.959112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0530 (9): Bad file descriptor 00:31:25.590 [2024-07-14 01:17:14.959132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.590 [2024-07-14 01:17:14.959146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.590 [2024-07-14 01:17:14.959160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.590 [2024-07-14 01:17:14.959179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.590 [2024-07-14 01:17:14.968873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.590 [2024-07-14 01:17:14.969218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.590 [2024-07-14 01:17:14.969249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb0530 with addr=10.0.0.2, port=4420 00:31:25.590 [2024-07-14 01:17:14.969267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0530 is same with the state(5) to be set 00:31:25.590 [2024-07-14 01:17:14.969292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0530 (9): Bad file descriptor 00:31:25.590 [2024-07-14 01:17:14.969315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.590 [2024-07-14 01:17:14.969330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.590 [2024-07-14 01:17:14.969345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.590 [2024-07-14 01:17:14.969366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
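The connect() failures with errno = 111 (ECONNREFUSED) in this stretch are the expected fallout of the nvmf_subsystem_remove_listener call above: the target no longer listens on 10.0.0.2:4420, so the host's reset/reconnect attempts to that port are refused until the discovery poller drops the 4420 path and only 4421 remains. A quick way to confirm the target-side listener state during such a window might look like the following (an illustrative check, not part of the test script; it assumes the target's default RPC socket):

# ports nqn.2016-06.io.spdk:cnode0 is still listening on
scripts/rpc.py nvmf_get_subsystems \
    | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0") | .listen_addresses[].trsvcid'
# expected output at this point: 4421 only, since 4420 was just removed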
00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.590 01:17:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.590 [2024-07-14 01:17:14.978966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.590 [2024-07-14 01:17:14.979179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.590 [2024-07-14 01:17:14.979208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb0530 with addr=10.0.0.2, port=4420 00:31:25.590 [2024-07-14 01:17:14.979240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0530 is same with the state(5) to be set 00:31:25.590 [2024-07-14 01:17:14.979262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0530 (9): Bad file descriptor 00:31:25.590 [2024-07-14 01:17:14.979282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.590 [2024-07-14 01:17:14.979296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.590 [2024-07-14 01:17:14.979309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.590 [2024-07-14 01:17:14.979327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
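The host/discovery.sh@55 and @63 fragments above show the two state probes the test leans on: the bdev list reported by the host application on /tmp/host.sock, and the trsvcid of every path behind controller nvme0. Run by hand with SPDK's scripts/rpc.py (the test wraps the same RPCs in its rpc_cmd helper), they would look roughly like:

# bdev names seen by the host app (get_bdev_list in the trace)
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

# ports of every path attached to controller nvme0 (get_subsystem_paths in the trace)
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs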
00:31:25.590 [2024-07-14 01:17:14.989047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.590 [2024-07-14 01:17:14.989296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.590 [2024-07-14 01:17:14.989329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb0530 with addr=10.0.0.2, port=4420 00:31:25.590 [2024-07-14 01:17:14.989348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0530 is same with the state(5) to be set 00:31:25.590 [2024-07-14 01:17:14.989373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0530 (9): Bad file descriptor 00:31:25.590 [2024-07-14 01:17:14.989397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.590 [2024-07-14 01:17:14.989413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.590 [2024-07-14 01:17:14.989428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.590 [2024-07-14 01:17:14.989449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.590 [2024-07-14 01:17:14.999121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.590 [2024-07-14 01:17:14.999372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.590 [2024-07-14 01:17:14.999404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb0530 with addr=10.0.0.2, port=4420 00:31:25.590 [2024-07-14 01:17:14.999423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0530 is same with the state(5) to be set 00:31:25.590 [2024-07-14 01:17:14.999448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0530 (9): Bad file descriptor 00:31:25.590 [2024-07-14 01:17:14.999480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.590 [2024-07-14 01:17:14.999511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.590 [2024-07-14 01:17:14.999530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.590 [2024-07-14 01:17:14.999553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
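The notification bookkeeping at host/discovery.sh@74-@75 counts bdev add/remove events newer than the last notify_id the test has already consumed, which is why a count of 0 here simply means no new bdev events since the previous check. Pieced together from the xtrace (variable names follow the trace; the actual discovery.sh may differ), the helper behaves roughly like:

get_notification_count() {
    # events with id greater than the last one we accounted for
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    # advance the high-water mark so the next call only sees newer events
    notify_id=$((notify_id + notification_count))
}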
00:31:25.849 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.849 [2024-07-14 01:17:15.009219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:25.849 [2024-07-14 01:17:15.009484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:25.849 [2024-07-14 01:17:15.009512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bb0530 with addr=10.0.0.2, port=4420 00:31:25.850 [2024-07-14 01:17:15.009529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0530 is same with the state(5) to be set 00:31:25.850 [2024-07-14 01:17:15.009558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0530 (9): Bad file descriptor 00:31:25.850 [2024-07-14 01:17:15.009580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:25.850 [2024-07-14 01:17:15.009594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:25.850 [2024-07-14 01:17:15.009623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:25.850 [2024-07-14 01:17:15.009643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:25.850 [2024-07-14 01:17:15.012874] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:25.850 [2024-07-14 01:17:15.012921] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.850 01:17:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.229 [2024-07-14 01:17:16.295104] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:27.229 [2024-07-14 01:17:16.295152] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:27.229 [2024-07-14 01:17:16.295176] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:27.229 [2024-07-14 01:17:16.381452] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:27.229 [2024-07-14 01:17:16.449635] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:27.229 [2024-07-14 01:17:16.449683] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:27.229 request: 00:31:27.229 { 00:31:27.229 "name": "nvme", 00:31:27.229 "trtype": "tcp", 00:31:27.229 "traddr": "10.0.0.2", 00:31:27.229 "adrfam": "ipv4", 00:31:27.229 "trsvcid": "8009", 00:31:27.229 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:27.229 "wait_for_attach": true, 00:31:27.229 "method": "bdev_nvme_start_discovery", 00:31:27.229 "req_id": 1 00:31:27.229 } 00:31:27.229 Got JSON-RPC error response 00:31:27.229 response: 00:31:27.229 { 00:31:27.229 "code": -17, 00:31:27.229 "message": "File exists" 00:31:27.229 } 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.229 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.229 request: 00:31:27.229 { 00:31:27.229 "name": "nvme_second", 00:31:27.229 "trtype": "tcp", 00:31:27.229 "traddr": "10.0.0.2", 00:31:27.229 "adrfam": "ipv4", 00:31:27.229 "trsvcid": "8009", 00:31:27.229 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:27.229 "wait_for_attach": true, 00:31:27.229 "method": "bdev_nvme_start_discovery", 00:31:27.229 "req_id": 1 00:31:27.229 } 00:31:27.229 Got JSON-RPC error response 00:31:27.229 response: 00:31:27.230 { 00:31:27.230 "code": -17, 00:31:27.230 "message": "File exists" 00:31:27.230 } 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:27.230 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.488 01:17:16 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.488 01:17:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.426 [2024-07-14 01:17:17.670020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:28.426 [2024-07-14 01:17:17.670078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bebec0 with addr=10.0.0.2, port=8010 00:31:28.426 [2024-07-14 01:17:17.670112] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:28.426 [2024-07-14 01:17:17.670128] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:28.426 [2024-07-14 01:17:17.670165] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:29.366 [2024-07-14 01:17:18.672564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.366 [2024-07-14 01:17:18.672654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bebec0 with addr=10.0.0.2, port=8010 00:31:29.366 [2024-07-14 01:17:18.672706] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:29.366 [2024-07-14 01:17:18.672722] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:29.366 [2024-07-14 01:17:18.672735] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:30.305 [2024-07-14 01:17:19.674605] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:30.305 request: 00:31:30.305 { 00:31:30.305 "name": "nvme_second", 00:31:30.305 "trtype": "tcp", 00:31:30.305 "traddr": "10.0.0.2", 00:31:30.305 "adrfam": "ipv4", 00:31:30.305 "trsvcid": "8010", 00:31:30.305 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:30.305 "wait_for_attach": false, 00:31:30.305 "attach_timeout_ms": 3000, 00:31:30.305 "method": "bdev_nvme_start_discovery", 00:31:30.305 "req_id": 1 00:31:30.305 } 00:31:30.305 Got JSON-RPC error response 00:31:30.305 response: 00:31:30.305 { 00:31:30.305 "code": -110, 
00:31:30.305 "message": "Connection timed out" 00:31:30.305 } 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:30.305 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1264178 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:30.564 rmmod nvme_tcp 00:31:30.564 rmmod nvme_fabrics 00:31:30.564 rmmod nvme_keyring 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1264039 ']' 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1264039 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1264039 ']' 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1264039 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1264039 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1264039' 00:31:30.564 killing process with pid 1264039 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1264039 00:31:30.564 01:17:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1264039 00:31:30.822 01:17:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:30.822 01:17:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:30.822 01:17:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:30.822 01:17:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:30.822 01:17:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:30.823 01:17:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.823 01:17:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:30.823 01:17:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.726 01:17:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:32.726 00:31:32.726 real 0m14.067s 00:31:32.726 user 0m20.913s 00:31:32.726 sys 0m2.783s 00:31:32.726 01:17:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:32.726 01:17:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:32.726 ************************************ 00:31:32.726 END TEST nvmf_host_discovery 00:31:32.726 ************************************ 00:31:32.985 01:17:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:32.985 01:17:22 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:32.985 01:17:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:32.985 01:17:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:32.985 01:17:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:32.985 ************************************ 00:31:32.985 START TEST nvmf_host_multipath_status 00:31:32.985 ************************************ 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:32.985 * Looking for test storage... 
00:31:32.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:32.985 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:32.986 01:17:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:32.986 01:17:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:34.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:34.890 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:34.890 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:34.890 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:34.890 01:17:24 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:34.890 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:34.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:34.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:31:34.891 00:31:34.891 --- 10.0.0.2 ping statistics --- 00:31:34.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.891 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:34.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:34.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:31:34.891 00:31:34.891 --- 10.0.0.1 ping statistics --- 00:31:34.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:34.891 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:34.891 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1267344 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1267344 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1267344 ']' 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:35.150 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:35.150 [2024-07-14 01:17:24.360710] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:31:35.150 [2024-07-14 01:17:24.360784] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.150 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.150 [2024-07-14 01:17:24.422690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:35.150 [2024-07-14 01:17:24.505770] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.150 [2024-07-14 01:17:24.505825] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.150 [2024-07-14 01:17:24.505848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.150 [2024-07-14 01:17:24.505859] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.150 [2024-07-14 01:17:24.505876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.150 [2024-07-14 01:17:24.505947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.150 [2024-07-14 01:17:24.505952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.408 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:35.408 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:35.408 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:35.408 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:35.408 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:35.408 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.408 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1267344 00:31:35.408 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:35.666 [2024-07-14 01:17:24.858891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.666 01:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:35.923 Malloc0 00:31:35.923 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:36.181 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.439 01:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.696 [2024-07-14 01:17:25.886170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.696 01:17:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:36.954 [2024-07-14 01:17:26.134840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:36.954 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1267510 00:31:36.954 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:36.954 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:36.954 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1267510 /var/tmp/bdevperf.sock 00:31:36.954 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1267510 ']' 00:31:36.954 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:36.954 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:36.954 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:36.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:36.955 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:36.955 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:37.212 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:37.212 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:37.212 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:37.470 01:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:38.038 Nvme0n1 00:31:38.038 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:38.298 Nvme0n1 00:31:38.298 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:38.298 01:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:40.234 01:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:40.234 01:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:40.492 01:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:41.058 01:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:41.991 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:41.991 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:41.991 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.991 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:42.251 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.251 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:42.251 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.251 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:42.509 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:42.509 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:42.509 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.509 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:42.509 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.509 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:42.509 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.509 01:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:42.766 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.766 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:42.766 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.766 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:43.023 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.023 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:43.023 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.023 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:43.280 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.281 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:43.281 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:43.538 01:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:43.798 01:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:45.174 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:45.174 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:45.174 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.174 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:45.174 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:45.174 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:45.174 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.174 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:45.431 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.431 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:45.431 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.431 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:45.689 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.689 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:45.689 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.689 01:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:45.947 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.947 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:45.947 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.947 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:46.205 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.205 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:46.205 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.205 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.464 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.464 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:46.464 01:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:46.722 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:46.981 01:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:47.916 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:47.916 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:47.916 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.916 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:48.174 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.174 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:48.174 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.174 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:48.432 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:48.432 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:48.432 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.432 01:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:48.690 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.691 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:48.691 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.691 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:48.949 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.949 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:48.949 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.949 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:49.207 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.207 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:49.207 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.207 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:49.466 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.466 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:49.466 01:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:49.725 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:49.984 01:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:50.923 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:50.923 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:50.923 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.923 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:51.181 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.181 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:51.181 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.181 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:51.439 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:51.439 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:51.440 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.440 01:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:51.698 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.698 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:51.698 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.698 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:51.956 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.956 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:51.956 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.956 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:52.215 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:31:52.215 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:52.215 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.215 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:52.473 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.473 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:52.473 01:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:52.731 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:52.995 01:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:53.992 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:53.992 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:53.992 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.992 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.251 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.251 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:54.251 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.251 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:54.508 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:54.508 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:54.508 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.508 01:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:54.791 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.791 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:31:54.791 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.791 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.049 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.049 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:55.049 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.049 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.306 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.306 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:55.306 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.306 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:55.565 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:55.565 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:55.565 01:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:55.823 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:56.081 01:17:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:57.015 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:57.015 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:57.015 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.015 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:57.273 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.273 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:57.273 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.273 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:57.531 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.531 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:57.531 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.531 01:17:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:57.788 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.788 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:57.788 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.788 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:58.046 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.046 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:58.046 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.046 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:58.304 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.304 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:58.304 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.304 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:58.562 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.562 01:17:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:58.820 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:58.820 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:59.078 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:59.336 01:17:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:32:00.272 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:32:00.272 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:00.272 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.272 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:00.529 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.529 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:00.529 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.529 01:17:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:00.787 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.787 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:00.787 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.787 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:01.045 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.045 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:01.045 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.045 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:01.302 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.302 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:01.302 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.302 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:01.560 01:17:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.560 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:01.560 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.560 01:17:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:01.818 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.818 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:32:01.818 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:02.076 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:02.334 01:17:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:32:03.268 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:32:03.268 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:03.268 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.268 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:03.526 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:03.527 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:03.527 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.527 01:17:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:03.783 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.783 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:03.783 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:03.783 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:04.040 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.040 01:17:53 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:04.040 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.040 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:04.297 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.297 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:04.297 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.297 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:04.555 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.555 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:04.555 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.555 01:17:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:04.812 01:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.812 01:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:04.812 01:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:05.069 01:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:05.327 01:17:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:06.262 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:06.262 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:06.262 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.262 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:06.568 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.568 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:06.568 01:17:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.568 01:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:06.826 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:06.826 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:06.826 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:06.826 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:07.084 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.084 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:07.084 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.084 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:07.342 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.342 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:07.342 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.342 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:07.600 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.600 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:07.600 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:07.600 01:17:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:07.858 01:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:07.858 01:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:07.858 01:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:08.117 01:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:08.375 01:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:09.310 01:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:09.310 01:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:09.310 01:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.310 01:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:09.568 01:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:09.568 01:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:09.568 01:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.568 01:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:09.826 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:09.826 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:09.826 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:09.827 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:10.085 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.085 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:10.086 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.086 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:10.344 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.344 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:10.344 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.344 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:10.604 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:10.604 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:10.604 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:10.604 01:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1267510 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1267510 ']' 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1267510 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1267510 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:10.862 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1267510' 00:32:10.862 killing process with pid 1267510 00:32:10.863 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1267510 00:32:10.863 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1267510 00:32:11.125 Connection closed with partial response: 00:32:11.125 00:32:11.125 00:32:11.125 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1267510 00:32:11.125 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:11.125 [2024-07-14 01:17:26.193472] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:32:11.125 [2024-07-14 01:17:26.193575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267510 ] 00:32:11.125 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.125 [2024-07-14 01:17:26.257085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.125 [2024-07-14 01:17:26.344273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:11.125 Running I/O for 90 seconds... 
00:32:11.125 [2024-07-14 01:17:42.065601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.065658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.065736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.065756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.065780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.065798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.065944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.065964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.065988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.066777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.066978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:11.125 [2024-07-14 01:17:42.067802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.125 [2024-07-14 01:17:42.067818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.067842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.067894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.067928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.067946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.067970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.067988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:90 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068908] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.068969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.068987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.069011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.069027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.069052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.069069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.069093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.069110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.069133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.069161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.069185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:11.126 [2024-07-14 01:17:42.069217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.069240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.126 [2024-07-14 01:17:42.069257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.069281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.126 [2024-07-14 01:17:42.069298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:11.126 [2024-07-14 01:17:42.069321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:11.126 [2024-07-14 01:17:42.069346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 
sqhd:003c p:0 m:0 dnr:0
[... repetitive nvme_qpair.c NOTICE output trimmed here: a long run of further READ/WRITE command/completion pairs on qid:1 (lba 109416-110408 in the burst timestamped 01:17:42, lba 27368-28336 in the burst timestamped 01:17:57), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:32:11.129 Received shutdown signal, test time was about 32.419849 seconds
00:32:11.129 
00:32:11.129                                                        Latency(us)
00:32:11.129 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:11.129 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:11.129 	Verification LBA range: start 0x0 length 0x4000
00:32:11.129 	Nvme0n1                 :      32.42    7283.77      28.45       0.00     0.00   17541.00    1037.65 4026531.84
00:32:11.129 ===================================================================================================================
00:32:11.129 Total                       :               7283.77      28.45       0.00     0.00   17541.00    1037.65 4026531.84
00:32:11.129 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1267344 ']'
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1267344
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1267344 ']'
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1267344
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1267344
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1267344'
killing process with pid 1267344
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1267344
00:32:11.388 01:18:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1267344
00:32:11.647 01:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:11.647 01:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:11.647 01:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:11.647 01:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:11.647 01:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:11.647 01:18:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:11.647 01:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:11.647 01:18:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:14.180 01:18:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:14.180 
00:32:14.180 real	0m40.868s
00:32:14.180 user	1m54.831s
00:32:14.180 sys	0m13.496s
00:32:14.180 01:18:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:32:14.180 01:18:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:14.180 ************************************
00:32:14.180 END TEST nvmf_host_multipath_status
00:32:14.180 ************************************
00:32:14.180 01:18:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
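For reference, the nvmftestfini/killprocess trace above boils down to roughly the following teardown sequence. This is a simplified sketch assembled only from the commands visible in this log, not the full nvmf/common.sh logic; the rpc.py path, the subsystem NQN, and the PID are the ones used by this particular run, and the module-unload retry loop and error handling are omitted.

    #!/usr/bin/env bash
    # Sketch of the multipath_status teardown traced above (assumptions: paths/PID taken from this job's log).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NVMF_PID=1267344   # pid of the SPDK nvmf target started earlier in this job

    # Remove the subsystem the host was connected to.
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Flush outstanding I/O and unload the kernel NVMe/TCP initiator modules.
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the SPDK target. The real test also waits on the pid, which works there
    # because the target process is a child of the test shell.
    kill "$NVMF_PID"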
00:32:14.180 01:18:03 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:14.180 01:18:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:32:14.180 01:18:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:14.180 01:18:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:14.180 ************************************
00:32:14.180 START TEST nvmf_discovery_remove_ifc
00:32:14.180 ************************************
00:32:14.180 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:14.180 * Looking for test storage...
00:32:14.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:14.180 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.180 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:14.181 01:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:16.082 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:16.083 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:16.083 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:16.083 01:18:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:16.083 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:16.083 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:16.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:16.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:32:16.083 00:32:16.083 --- 10.0.0.2 ping statistics --- 00:32:16.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.083 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:16.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:32:16.083 00:32:16.083 --- 10.0.0.1 ping statistics --- 00:32:16.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.083 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1273803 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1273803 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1273803 ']' 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:16.083 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.083 [2024-07-14 01:18:05.342685] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
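The nvmf_tcp_init sequence above builds the point-to-point test network for this run: the target-side port (cvl_0_0, 0000:0a:00.0) is moved into a private network namespace and addressed as 10.0.0.2/24, the initiator-side port (cvl_0_1, 0000:0a:00.1) stays in the default namespace as 10.0.0.1/24, and both directions are verified with a single ping. A standalone sketch of those steps, assuming the same cvl_0_* interface names, the cvl_0_0_ns_spdk namespace, and an SPDK build tree for the nvmf_tgt binary (paths outside this run may differ):

  # isolate the target port in its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # address and bring up both ends of the link
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # admit NVMe/TCP traffic on the default data port and check reachability
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # the SPDK target then runs inside the namespace with the flags seen above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2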
00:32:16.083 [2024-07-14 01:18:05.342777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.083 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.083 [2024-07-14 01:18:05.417229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.342 [2024-07-14 01:18:05.512297] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.342 [2024-07-14 01:18:05.512345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.342 [2024-07-14 01:18:05.512365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.342 [2024-07-14 01:18:05.512376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.342 [2024-07-14 01:18:05.512386] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.342 [2024-07-14 01:18:05.512412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.342 [2024-07-14 01:18:05.659819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.342 [2024-07-14 01:18:05.668044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:16.342 null0 00:32:16.342 [2024-07-14 01:18:05.699963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1273925 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1273925 /tmp/host.sock 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1273925 ']' 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:16.342 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:16.342 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.602 [2024-07-14 01:18:05.765987] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:32:16.602 [2024-07-14 01:18:05.766063] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273925 ] 00:32:16.602 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.602 [2024-07-14 01:18:05.825287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.602 [2024-07-14 01:18:05.912322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.602 01:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.863 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.863 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:16.863 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.863 01:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:17.799 [2024-07-14 01:18:07.139801] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:17.799 [2024-07-14 01:18:07.139843] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:17.799 [2024-07-14 01:18:07.139876] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:18.058 [2024-07-14 01:18:07.267310] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:18.058 [2024-07-14 01:18:07.410499] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:18.058 [2024-07-14 01:18:07.410569] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:18.058 [2024-07-14 01:18:07.410616] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:18.058 [2024-07-14 01:18:07.410645] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:18.058 [2024-07-14 01:18:07.410689] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:18.058 [2024-07-14 01:18:07.416408] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c10300 was disconnected and freed. delete nvme_qpair. 
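On the host side the test launches a second SPDK app as the initiator, attaches it to the discovery service at 10.0.0.2:8009, and waits until the data-path subsystem at 10.0.0.2:4420 shows up as the bdev nvme0n1. The RPCs below are the ones issued above through the harness's rpc_cmd helper; they are shown here against scripts/rpc.py from an SPDK checkout (which rpc_cmd wraps) as a hedged, standalone sketch rather than the exact harness code:

  # initiator-side SPDK app with its own RPC socket, as started at @58 above
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

  # options and framework init, then discovery attach with short reconnect/loss timeouts
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

  # get_bdev_list / wait_for_bdev pattern visible in the log: poll the sorted
  # bdev names once a second until they match the expected value
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do sleep 1; done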
00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:18.058 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:18.318 01:18:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:19.257 01:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:20.190 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:20.191 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:20.191 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:20.191 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.191 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:20.191 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:32:20.191 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:20.450 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.450 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:20.450 01:18:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:21.410 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.410 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.410 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.410 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.410 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.411 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.411 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.411 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.411 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:21.411 01:18:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:22.352 01:18:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:23.732 01:18:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.732 [2024-07-14 01:18:12.851591] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:23.732 [2024-07-14 01:18:12.851663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.732 [2024-07-14 01:18:12.851684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.732 [2024-07-14 01:18:12.851714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.732 [2024-07-14 01:18:12.851727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.732 [2024-07-14 01:18:12.851740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.732 [2024-07-14 01:18:12.851752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.732 [2024-07-14 01:18:12.851776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.732 [2024-07-14 01:18:12.851789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.732 [2024-07-14 01:18:12.851802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.732 [2024-07-14 01:18:12.851815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.732 [2024-07-14 01:18:12.851827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6b40 is same with the state(5) to be set 00:32:23.732 [2024-07-14 01:18:12.861607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd6b40 (9): Bad file descriptor 00:32:23.732 [2024-07-14 01:18:12.871654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:24.666 [2024-07-14 01:18:13.917904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:24.666 [2024-07-14 
01:18:13.917976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bd6b40 with addr=10.0.0.2, port=4420 00:32:24.666 [2024-07-14 01:18:13.918007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd6b40 is same with the state(5) to be set 00:32:24.666 [2024-07-14 01:18:13.918072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd6b40 (9): Bad file descriptor 00:32:24.666 [2024-07-14 01:18:13.918603] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:24.666 [2024-07-14 01:18:13.918640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:24.666 [2024-07-14 01:18:13.918658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:24.666 [2024-07-14 01:18:13.918678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:24.666 [2024-07-14 01:18:13.918716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:24.666 [2024-07-14 01:18:13.918737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:24.666 01:18:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:25.600 [2024-07-14 01:18:14.921239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:25.600 [2024-07-14 01:18:14.921272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:25.600 [2024-07-14 01:18:14.921295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:25.600 [2024-07-14 01:18:14.921310] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:25.600 [2024-07-14 01:18:14.921333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
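The connect() errno 110 (Connection timed out) and "Resetting controller failed" messages above are the intended failure: host/discovery_remove_ifc.sh@75-76 earlier removed the target address and downed cvl_0_0 inside the namespace while nvme0 was attached. Because the attach above passed --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, the initiator retries roughly once per second and gives up on the controller after about two seconds, which is what deletes nvme0n1. The fault injection itself amounts to:

  # pull the target address and link out from under the attached controller
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down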
00:32:25.600 [2024-07-14 01:18:14.921376] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:25.600 [2024-07-14 01:18:14.921418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.600 [2024-07-14 01:18:14.921443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.600 [2024-07-14 01:18:14.921464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.600 [2024-07-14 01:18:14.921480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.600 [2024-07-14 01:18:14.921496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.600 [2024-07-14 01:18:14.921511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.600 [2024-07-14 01:18:14.921536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.600 [2024-07-14 01:18:14.921551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.600 [2024-07-14 01:18:14.921567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:25.600 [2024-07-14 01:18:14.921582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:25.600 [2024-07-14 01:18:14.921596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
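Once the discovery controller reaches the failed state, the discovery entry for nqn.2016-06.io.spdk:cnode0 is dropped and the namespace bdev goes with it; the bdev listing just below comes back empty, which is the condition the wait_for_bdev '' gate at @79 was waiting for before the interface is restored. A quick manual check of the same condition (same host socket as above):

  # expect empty output once nvme0n1 has been deleted
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'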
00:32:25.600 [2024-07-14 01:18:14.921698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd5f80 (9): Bad file descriptor 00:32:25.600 [2024-07-14 01:18:14.922728] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:25.600 [2024-07-14 01:18:14.922754] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.600 01:18:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.600 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:25.600 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:25.600 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:25.600 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.600 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:25.600 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:25.600 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:25.600 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:25.860 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.860 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:25.860 01:18:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:26.799 01:18:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:27.734 [2024-07-14 01:18:16.975066] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:27.734 [2024-07-14 01:18:16.975093] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:27.734 [2024-07-14 01:18:16.975119] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:27.734 [2024-07-14 01:18:17.061409] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.734 [2024-07-14 01:18:17.124243] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:27.734 [2024-07-14 01:18:17.124292] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:27.734 [2024-07-14 01:18:17.124325] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:27.734 [2024-07-14 01:18:17.124349] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:27.734 [2024-07-14 01:18:17.124363] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:27.734 [2024-07-14 01:18:17.132472] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bed3f0 was disconnected and freed. delete nvme_qpair. 
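With the removal path verified, the script reverses the fault: it re-adds 10.0.0.2/24 and brings cvl_0_0 back up, the discovery connection comes back, and the re-attached subsystem appears above as a new controller (nvme1) and a new bdev (nvme1n1). The restore step mirrors the earlier teardown:

  # restore the target address and link inside the namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # then poll bdev_get_bdevs as before until "nvme1n1" is reported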
00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:27.734 01:18:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1273925 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1273925 ']' 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1273925 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1273925 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1273925' 00:32:29.120 killing process with pid 1273925 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1273925 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1273925 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:29.120 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:29.121 rmmod nvme_tcp 00:32:29.121 rmmod nvme_fabrics 00:32:29.121 rmmod nvme_keyring 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1273803 ']' 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1273803 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1273803 ']' 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1273803 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1273803 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1273803' 00:32:29.121 killing process with pid 1273803 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1273803 00:32:29.121 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1273803 00:32:29.379 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:29.379 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:29.379 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:29.379 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:29.379 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:29.379 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.379 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:29.379 01:18:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.919 01:18:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:31.919 00:32:31.919 real 0m17.665s 00:32:31.919 user 0m25.625s 00:32:31.919 sys 0m2.990s 00:32:31.919 01:18:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:31.919 01:18:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:31.919 ************************************ 00:32:31.919 END TEST nvmf_discovery_remove_ifc 00:32:31.919 ************************************ 00:32:31.919 01:18:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:31.919 01:18:20 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:31.919 01:18:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:31.919 01:18:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:32:31.919 01:18:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:31.919 ************************************ 00:32:31.919 START TEST nvmf_identify_kernel_target 00:32:31.919 ************************************ 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:31.919 * Looking for test storage... 00:32:31.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:31.919 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:31.920 01:18:20 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:31.920 01:18:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:33.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:33.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:33.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:33.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:33.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:32:33.822 00:32:33.822 --- 10.0.0.2 ping statistics --- 00:32:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.822 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:33.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:33.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:32:33.822 00:32:33.822 --- 10.0.0.1 ping statistics --- 00:32:33.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.822 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:33.822 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:33.823 01:18:22 
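Before the kernel target is configured below, nvmftestinit has just built the TCP test topology traced above: one E810 port (cvl_0_0) is moved into a fresh network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and connectivity is ping-checked in both directions. In this particular test the kernel nvmet target then listens on the root-namespace address 10.0.0.1. A minimal sketch of the same steps, with the interface names and addresses taken from the trace:

# sketch only -- interface names and addresses as used in this run
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1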
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:33.823 01:18:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:34.763 Waiting for block devices as requested 00:32:34.763 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:34.763 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.023 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.023 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.023 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.283 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.283 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:35.283 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:35.283 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:35.543 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.543 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.543 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.543 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.804 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.804 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:35.804 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:35.804 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:36.064 No valid GPT data, bailing 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:36.064 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:36.326 00:32:36.326 Discovery Log Number of Records 2, Generation counter 2 00:32:36.326 =====Discovery Log Entry 0====== 00:32:36.326 trtype: tcp 00:32:36.326 adrfam: ipv4 00:32:36.326 subtype: current discovery subsystem 00:32:36.326 treq: not specified, sq flow control disable supported 00:32:36.326 portid: 1 00:32:36.326 trsvcid: 4420 00:32:36.326 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:36.326 traddr: 10.0.0.1 00:32:36.326 eflags: none 00:32:36.326 sectype: none 00:32:36.326 =====Discovery Log Entry 1====== 00:32:36.326 trtype: tcp 00:32:36.326 adrfam: ipv4 00:32:36.326 subtype: nvme subsystem 00:32:36.326 treq: not specified, sq flow control disable supported 00:32:36.326 portid: 1 00:32:36.326 trsvcid: 4420 00:32:36.326 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:36.326 traddr: 10.0.0.1 00:32:36.326 eflags: none 00:32:36.326 sectype: none 00:32:36.326 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:36.326 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:36.326 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.326 ===================================================== 00:32:36.326 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:36.326 ===================================================== 00:32:36.326 Controller Capabilities/Features 00:32:36.326 ================================ 00:32:36.326 Vendor ID: 0000 00:32:36.326 Subsystem Vendor ID: 0000 00:32:36.326 Serial Number: 2c7ccb6e0f02534affe2 00:32:36.326 Model Number: Linux 00:32:36.326 Firmware Version: 6.7.0-68 00:32:36.326 Recommended Arb Burst: 0 00:32:36.326 IEEE OUI Identifier: 00 00 00 00:32:36.326 Multi-path I/O 00:32:36.326 May have multiple subsystem ports: No 00:32:36.326 May have multiple 
controllers: No 00:32:36.326 Associated with SR-IOV VF: No 00:32:36.326 Max Data Transfer Size: Unlimited 00:32:36.326 Max Number of Namespaces: 0 00:32:36.326 Max Number of I/O Queues: 1024 00:32:36.326 NVMe Specification Version (VS): 1.3 00:32:36.326 NVMe Specification Version (Identify): 1.3 00:32:36.326 Maximum Queue Entries: 1024 00:32:36.326 Contiguous Queues Required: No 00:32:36.326 Arbitration Mechanisms Supported 00:32:36.326 Weighted Round Robin: Not Supported 00:32:36.326 Vendor Specific: Not Supported 00:32:36.326 Reset Timeout: 7500 ms 00:32:36.326 Doorbell Stride: 4 bytes 00:32:36.326 NVM Subsystem Reset: Not Supported 00:32:36.326 Command Sets Supported 00:32:36.326 NVM Command Set: Supported 00:32:36.326 Boot Partition: Not Supported 00:32:36.326 Memory Page Size Minimum: 4096 bytes 00:32:36.326 Memory Page Size Maximum: 4096 bytes 00:32:36.326 Persistent Memory Region: Not Supported 00:32:36.326 Optional Asynchronous Events Supported 00:32:36.326 Namespace Attribute Notices: Not Supported 00:32:36.326 Firmware Activation Notices: Not Supported 00:32:36.326 ANA Change Notices: Not Supported 00:32:36.326 PLE Aggregate Log Change Notices: Not Supported 00:32:36.326 LBA Status Info Alert Notices: Not Supported 00:32:36.326 EGE Aggregate Log Change Notices: Not Supported 00:32:36.326 Normal NVM Subsystem Shutdown event: Not Supported 00:32:36.326 Zone Descriptor Change Notices: Not Supported 00:32:36.326 Discovery Log Change Notices: Supported 00:32:36.326 Controller Attributes 00:32:36.326 128-bit Host Identifier: Not Supported 00:32:36.326 Non-Operational Permissive Mode: Not Supported 00:32:36.326 NVM Sets: Not Supported 00:32:36.326 Read Recovery Levels: Not Supported 00:32:36.326 Endurance Groups: Not Supported 00:32:36.326 Predictable Latency Mode: Not Supported 00:32:36.326 Traffic Based Keep ALive: Not Supported 00:32:36.326 Namespace Granularity: Not Supported 00:32:36.326 SQ Associations: Not Supported 00:32:36.326 UUID List: Not Supported 00:32:36.326 Multi-Domain Subsystem: Not Supported 00:32:36.326 Fixed Capacity Management: Not Supported 00:32:36.326 Variable Capacity Management: Not Supported 00:32:36.326 Delete Endurance Group: Not Supported 00:32:36.326 Delete NVM Set: Not Supported 00:32:36.326 Extended LBA Formats Supported: Not Supported 00:32:36.326 Flexible Data Placement Supported: Not Supported 00:32:36.326 00:32:36.326 Controller Memory Buffer Support 00:32:36.326 ================================ 00:32:36.326 Supported: No 00:32:36.326 00:32:36.326 Persistent Memory Region Support 00:32:36.326 ================================ 00:32:36.326 Supported: No 00:32:36.326 00:32:36.326 Admin Command Set Attributes 00:32:36.326 ============================ 00:32:36.326 Security Send/Receive: Not Supported 00:32:36.326 Format NVM: Not Supported 00:32:36.326 Firmware Activate/Download: Not Supported 00:32:36.326 Namespace Management: Not Supported 00:32:36.326 Device Self-Test: Not Supported 00:32:36.326 Directives: Not Supported 00:32:36.326 NVMe-MI: Not Supported 00:32:36.326 Virtualization Management: Not Supported 00:32:36.326 Doorbell Buffer Config: Not Supported 00:32:36.326 Get LBA Status Capability: Not Supported 00:32:36.326 Command & Feature Lockdown Capability: Not Supported 00:32:36.326 Abort Command Limit: 1 00:32:36.326 Async Event Request Limit: 1 00:32:36.326 Number of Firmware Slots: N/A 00:32:36.326 Firmware Slot 1 Read-Only: N/A 00:32:36.326 Firmware Activation Without Reset: N/A 00:32:36.326 Multiple Update Detection Support: N/A 
00:32:36.326 Firmware Update Granularity: No Information Provided 00:32:36.326 Per-Namespace SMART Log: No 00:32:36.326 Asymmetric Namespace Access Log Page: Not Supported 00:32:36.326 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:36.326 Command Effects Log Page: Not Supported 00:32:36.326 Get Log Page Extended Data: Supported 00:32:36.326 Telemetry Log Pages: Not Supported 00:32:36.326 Persistent Event Log Pages: Not Supported 00:32:36.326 Supported Log Pages Log Page: May Support 00:32:36.326 Commands Supported & Effects Log Page: Not Supported 00:32:36.326 Feature Identifiers & Effects Log Page:May Support 00:32:36.326 NVMe-MI Commands & Effects Log Page: May Support 00:32:36.326 Data Area 4 for Telemetry Log: Not Supported 00:32:36.326 Error Log Page Entries Supported: 1 00:32:36.326 Keep Alive: Not Supported 00:32:36.326 00:32:36.326 NVM Command Set Attributes 00:32:36.326 ========================== 00:32:36.326 Submission Queue Entry Size 00:32:36.326 Max: 1 00:32:36.326 Min: 1 00:32:36.326 Completion Queue Entry Size 00:32:36.326 Max: 1 00:32:36.326 Min: 1 00:32:36.326 Number of Namespaces: 0 00:32:36.326 Compare Command: Not Supported 00:32:36.326 Write Uncorrectable Command: Not Supported 00:32:36.326 Dataset Management Command: Not Supported 00:32:36.326 Write Zeroes Command: Not Supported 00:32:36.326 Set Features Save Field: Not Supported 00:32:36.326 Reservations: Not Supported 00:32:36.326 Timestamp: Not Supported 00:32:36.326 Copy: Not Supported 00:32:36.326 Volatile Write Cache: Not Present 00:32:36.326 Atomic Write Unit (Normal): 1 00:32:36.326 Atomic Write Unit (PFail): 1 00:32:36.326 Atomic Compare & Write Unit: 1 00:32:36.326 Fused Compare & Write: Not Supported 00:32:36.326 Scatter-Gather List 00:32:36.326 SGL Command Set: Supported 00:32:36.326 SGL Keyed: Not Supported 00:32:36.326 SGL Bit Bucket Descriptor: Not Supported 00:32:36.326 SGL Metadata Pointer: Not Supported 00:32:36.326 Oversized SGL: Not Supported 00:32:36.327 SGL Metadata Address: Not Supported 00:32:36.327 SGL Offset: Supported 00:32:36.327 Transport SGL Data Block: Not Supported 00:32:36.327 Replay Protected Memory Block: Not Supported 00:32:36.327 00:32:36.327 Firmware Slot Information 00:32:36.327 ========================= 00:32:36.327 Active slot: 0 00:32:36.327 00:32:36.327 00:32:36.327 Error Log 00:32:36.327 ========= 00:32:36.327 00:32:36.327 Active Namespaces 00:32:36.327 ================= 00:32:36.327 Discovery Log Page 00:32:36.327 ================== 00:32:36.327 Generation Counter: 2 00:32:36.327 Number of Records: 2 00:32:36.327 Record Format: 0 00:32:36.327 00:32:36.327 Discovery Log Entry 0 00:32:36.327 ---------------------- 00:32:36.327 Transport Type: 3 (TCP) 00:32:36.327 Address Family: 1 (IPv4) 00:32:36.327 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:36.327 Entry Flags: 00:32:36.327 Duplicate Returned Information: 0 00:32:36.327 Explicit Persistent Connection Support for Discovery: 0 00:32:36.327 Transport Requirements: 00:32:36.327 Secure Channel: Not Specified 00:32:36.327 Port ID: 1 (0x0001) 00:32:36.327 Controller ID: 65535 (0xffff) 00:32:36.327 Admin Max SQ Size: 32 00:32:36.327 Transport Service Identifier: 4420 00:32:36.327 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:36.327 Transport Address: 10.0.0.1 00:32:36.327 Discovery Log Entry 1 00:32:36.327 ---------------------- 00:32:36.327 Transport Type: 3 (TCP) 00:32:36.327 Address Family: 1 (IPv4) 00:32:36.327 Subsystem Type: 2 (NVM Subsystem) 00:32:36.327 Entry Flags: 
00:32:36.327 Duplicate Returned Information: 0 00:32:36.327 Explicit Persistent Connection Support for Discovery: 0 00:32:36.327 Transport Requirements: 00:32:36.327 Secure Channel: Not Specified 00:32:36.327 Port ID: 1 (0x0001) 00:32:36.327 Controller ID: 65535 (0xffff) 00:32:36.327 Admin Max SQ Size: 32 00:32:36.327 Transport Service Identifier: 4420 00:32:36.327 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:36.327 Transport Address: 10.0.0.1 00:32:36.327 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:36.327 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.327 get_feature(0x01) failed 00:32:36.327 get_feature(0x02) failed 00:32:36.327 get_feature(0x04) failed 00:32:36.327 ===================================================== 00:32:36.327 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:36.327 ===================================================== 00:32:36.327 Controller Capabilities/Features 00:32:36.327 ================================ 00:32:36.327 Vendor ID: 0000 00:32:36.327 Subsystem Vendor ID: 0000 00:32:36.327 Serial Number: 5ebad6b9c477bb360a89 00:32:36.327 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:36.327 Firmware Version: 6.7.0-68 00:32:36.327 Recommended Arb Burst: 6 00:32:36.327 IEEE OUI Identifier: 00 00 00 00:32:36.327 Multi-path I/O 00:32:36.327 May have multiple subsystem ports: Yes 00:32:36.327 May have multiple controllers: Yes 00:32:36.327 Associated with SR-IOV VF: No 00:32:36.327 Max Data Transfer Size: Unlimited 00:32:36.327 Max Number of Namespaces: 1024 00:32:36.327 Max Number of I/O Queues: 128 00:32:36.327 NVMe Specification Version (VS): 1.3 00:32:36.327 NVMe Specification Version (Identify): 1.3 00:32:36.327 Maximum Queue Entries: 1024 00:32:36.327 Contiguous Queues Required: No 00:32:36.327 Arbitration Mechanisms Supported 00:32:36.327 Weighted Round Robin: Not Supported 00:32:36.327 Vendor Specific: Not Supported 00:32:36.327 Reset Timeout: 7500 ms 00:32:36.327 Doorbell Stride: 4 bytes 00:32:36.327 NVM Subsystem Reset: Not Supported 00:32:36.327 Command Sets Supported 00:32:36.327 NVM Command Set: Supported 00:32:36.327 Boot Partition: Not Supported 00:32:36.327 Memory Page Size Minimum: 4096 bytes 00:32:36.327 Memory Page Size Maximum: 4096 bytes 00:32:36.327 Persistent Memory Region: Not Supported 00:32:36.327 Optional Asynchronous Events Supported 00:32:36.327 Namespace Attribute Notices: Supported 00:32:36.327 Firmware Activation Notices: Not Supported 00:32:36.327 ANA Change Notices: Supported 00:32:36.327 PLE Aggregate Log Change Notices: Not Supported 00:32:36.327 LBA Status Info Alert Notices: Not Supported 00:32:36.327 EGE Aggregate Log Change Notices: Not Supported 00:32:36.327 Normal NVM Subsystem Shutdown event: Not Supported 00:32:36.327 Zone Descriptor Change Notices: Not Supported 00:32:36.327 Discovery Log Change Notices: Not Supported 00:32:36.327 Controller Attributes 00:32:36.327 128-bit Host Identifier: Supported 00:32:36.327 Non-Operational Permissive Mode: Not Supported 00:32:36.327 NVM Sets: Not Supported 00:32:36.327 Read Recovery Levels: Not Supported 00:32:36.327 Endurance Groups: Not Supported 00:32:36.327 Predictable Latency Mode: Not Supported 00:32:36.327 Traffic Based Keep ALive: Supported 00:32:36.327 Namespace Granularity: Not Supported 
00:32:36.327 SQ Associations: Not Supported 00:32:36.327 UUID List: Not Supported 00:32:36.327 Multi-Domain Subsystem: Not Supported 00:32:36.327 Fixed Capacity Management: Not Supported 00:32:36.327 Variable Capacity Management: Not Supported 00:32:36.327 Delete Endurance Group: Not Supported 00:32:36.327 Delete NVM Set: Not Supported 00:32:36.327 Extended LBA Formats Supported: Not Supported 00:32:36.327 Flexible Data Placement Supported: Not Supported 00:32:36.327 00:32:36.327 Controller Memory Buffer Support 00:32:36.327 ================================ 00:32:36.327 Supported: No 00:32:36.327 00:32:36.327 Persistent Memory Region Support 00:32:36.327 ================================ 00:32:36.327 Supported: No 00:32:36.327 00:32:36.327 Admin Command Set Attributes 00:32:36.327 ============================ 00:32:36.327 Security Send/Receive: Not Supported 00:32:36.327 Format NVM: Not Supported 00:32:36.327 Firmware Activate/Download: Not Supported 00:32:36.327 Namespace Management: Not Supported 00:32:36.327 Device Self-Test: Not Supported 00:32:36.327 Directives: Not Supported 00:32:36.327 NVMe-MI: Not Supported 00:32:36.327 Virtualization Management: Not Supported 00:32:36.327 Doorbell Buffer Config: Not Supported 00:32:36.327 Get LBA Status Capability: Not Supported 00:32:36.327 Command & Feature Lockdown Capability: Not Supported 00:32:36.327 Abort Command Limit: 4 00:32:36.327 Async Event Request Limit: 4 00:32:36.327 Number of Firmware Slots: N/A 00:32:36.327 Firmware Slot 1 Read-Only: N/A 00:32:36.327 Firmware Activation Without Reset: N/A 00:32:36.327 Multiple Update Detection Support: N/A 00:32:36.327 Firmware Update Granularity: No Information Provided 00:32:36.327 Per-Namespace SMART Log: Yes 00:32:36.327 Asymmetric Namespace Access Log Page: Supported 00:32:36.327 ANA Transition Time : 10 sec 00:32:36.327 00:32:36.327 Asymmetric Namespace Access Capabilities 00:32:36.327 ANA Optimized State : Supported 00:32:36.327 ANA Non-Optimized State : Supported 00:32:36.327 ANA Inaccessible State : Supported 00:32:36.327 ANA Persistent Loss State : Supported 00:32:36.327 ANA Change State : Supported 00:32:36.327 ANAGRPID is not changed : No 00:32:36.327 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:36.327 00:32:36.327 ANA Group Identifier Maximum : 128 00:32:36.327 Number of ANA Group Identifiers : 128 00:32:36.327 Max Number of Allowed Namespaces : 1024 00:32:36.327 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:36.327 Command Effects Log Page: Supported 00:32:36.327 Get Log Page Extended Data: Supported 00:32:36.327 Telemetry Log Pages: Not Supported 00:32:36.327 Persistent Event Log Pages: Not Supported 00:32:36.327 Supported Log Pages Log Page: May Support 00:32:36.327 Commands Supported & Effects Log Page: Not Supported 00:32:36.327 Feature Identifiers & Effects Log Page:May Support 00:32:36.327 NVMe-MI Commands & Effects Log Page: May Support 00:32:36.327 Data Area 4 for Telemetry Log: Not Supported 00:32:36.327 Error Log Page Entries Supported: 128 00:32:36.327 Keep Alive: Supported 00:32:36.327 Keep Alive Granularity: 1000 ms 00:32:36.327 00:32:36.327 NVM Command Set Attributes 00:32:36.327 ========================== 00:32:36.327 Submission Queue Entry Size 00:32:36.327 Max: 64 00:32:36.327 Min: 64 00:32:36.327 Completion Queue Entry Size 00:32:36.327 Max: 16 00:32:36.327 Min: 16 00:32:36.327 Number of Namespaces: 1024 00:32:36.327 Compare Command: Not Supported 00:32:36.327 Write Uncorrectable Command: Not Supported 00:32:36.327 Dataset Management Command: Supported 
00:32:36.327 Write Zeroes Command: Supported 00:32:36.327 Set Features Save Field: Not Supported 00:32:36.327 Reservations: Not Supported 00:32:36.327 Timestamp: Not Supported 00:32:36.327 Copy: Not Supported 00:32:36.327 Volatile Write Cache: Present 00:32:36.327 Atomic Write Unit (Normal): 1 00:32:36.327 Atomic Write Unit (PFail): 1 00:32:36.327 Atomic Compare & Write Unit: 1 00:32:36.327 Fused Compare & Write: Not Supported 00:32:36.327 Scatter-Gather List 00:32:36.327 SGL Command Set: Supported 00:32:36.327 SGL Keyed: Not Supported 00:32:36.327 SGL Bit Bucket Descriptor: Not Supported 00:32:36.327 SGL Metadata Pointer: Not Supported 00:32:36.327 Oversized SGL: Not Supported 00:32:36.328 SGL Metadata Address: Not Supported 00:32:36.328 SGL Offset: Supported 00:32:36.328 Transport SGL Data Block: Not Supported 00:32:36.328 Replay Protected Memory Block: Not Supported 00:32:36.328 00:32:36.328 Firmware Slot Information 00:32:36.328 ========================= 00:32:36.328 Active slot: 0 00:32:36.328 00:32:36.328 Asymmetric Namespace Access 00:32:36.328 =========================== 00:32:36.328 Change Count : 0 00:32:36.328 Number of ANA Group Descriptors : 1 00:32:36.328 ANA Group Descriptor : 0 00:32:36.328 ANA Group ID : 1 00:32:36.328 Number of NSID Values : 1 00:32:36.328 Change Count : 0 00:32:36.328 ANA State : 1 00:32:36.328 Namespace Identifier : 1 00:32:36.328 00:32:36.328 Commands Supported and Effects 00:32:36.328 ============================== 00:32:36.328 Admin Commands 00:32:36.328 -------------- 00:32:36.328 Get Log Page (02h): Supported 00:32:36.328 Identify (06h): Supported 00:32:36.328 Abort (08h): Supported 00:32:36.328 Set Features (09h): Supported 00:32:36.328 Get Features (0Ah): Supported 00:32:36.328 Asynchronous Event Request (0Ch): Supported 00:32:36.328 Keep Alive (18h): Supported 00:32:36.328 I/O Commands 00:32:36.328 ------------ 00:32:36.328 Flush (00h): Supported 00:32:36.328 Write (01h): Supported LBA-Change 00:32:36.328 Read (02h): Supported 00:32:36.328 Write Zeroes (08h): Supported LBA-Change 00:32:36.328 Dataset Management (09h): Supported 00:32:36.328 00:32:36.328 Error Log 00:32:36.328 ========= 00:32:36.328 Entry: 0 00:32:36.328 Error Count: 0x3 00:32:36.328 Submission Queue Id: 0x0 00:32:36.328 Command Id: 0x5 00:32:36.328 Phase Bit: 0 00:32:36.328 Status Code: 0x2 00:32:36.328 Status Code Type: 0x0 00:32:36.328 Do Not Retry: 1 00:32:36.328 Error Location: 0x28 00:32:36.328 LBA: 0x0 00:32:36.328 Namespace: 0x0 00:32:36.328 Vendor Log Page: 0x0 00:32:36.328 ----------- 00:32:36.328 Entry: 1 00:32:36.328 Error Count: 0x2 00:32:36.328 Submission Queue Id: 0x0 00:32:36.328 Command Id: 0x5 00:32:36.328 Phase Bit: 0 00:32:36.328 Status Code: 0x2 00:32:36.328 Status Code Type: 0x0 00:32:36.328 Do Not Retry: 1 00:32:36.328 Error Location: 0x28 00:32:36.328 LBA: 0x0 00:32:36.328 Namespace: 0x0 00:32:36.328 Vendor Log Page: 0x0 00:32:36.328 ----------- 00:32:36.328 Entry: 2 00:32:36.328 Error Count: 0x1 00:32:36.328 Submission Queue Id: 0x0 00:32:36.328 Command Id: 0x4 00:32:36.328 Phase Bit: 0 00:32:36.328 Status Code: 0x2 00:32:36.328 Status Code Type: 0x0 00:32:36.328 Do Not Retry: 1 00:32:36.328 Error Location: 0x28 00:32:36.328 LBA: 0x0 00:32:36.328 Namespace: 0x0 00:32:36.328 Vendor Log Page: 0x0 00:32:36.328 00:32:36.328 Number of Queues 00:32:36.328 ================ 00:32:36.328 Number of I/O Submission Queues: 128 00:32:36.328 Number of I/O Completion Queues: 128 00:32:36.328 00:32:36.328 ZNS Specific Controller Data 00:32:36.328 
============================ 00:32:36.328 Zone Append Size Limit: 0 00:32:36.328 00:32:36.328 00:32:36.328 Active Namespaces 00:32:36.328 ================= 00:32:36.328 get_feature(0x05) failed 00:32:36.328 Namespace ID:1 00:32:36.328 Command Set Identifier: NVM (00h) 00:32:36.328 Deallocate: Supported 00:32:36.328 Deallocated/Unwritten Error: Not Supported 00:32:36.328 Deallocated Read Value: Unknown 00:32:36.328 Deallocate in Write Zeroes: Not Supported 00:32:36.328 Deallocated Guard Field: 0xFFFF 00:32:36.328 Flush: Supported 00:32:36.328 Reservation: Not Supported 00:32:36.328 Namespace Sharing Capabilities: Multiple Controllers 00:32:36.328 Size (in LBAs): 1953525168 (931GiB) 00:32:36.328 Capacity (in LBAs): 1953525168 (931GiB) 00:32:36.328 Utilization (in LBAs): 1953525168 (931GiB) 00:32:36.328 UUID: 4e62a634-69c3-4b4c-bf99-be39bb4f2a46 00:32:36.328 Thin Provisioning: Not Supported 00:32:36.328 Per-NS Atomic Units: Yes 00:32:36.328 Atomic Boundary Size (Normal): 0 00:32:36.328 Atomic Boundary Size (PFail): 0 00:32:36.328 Atomic Boundary Offset: 0 00:32:36.328 NGUID/EUI64 Never Reused: No 00:32:36.328 ANA group ID: 1 00:32:36.328 Namespace Write Protected: No 00:32:36.328 Number of LBA Formats: 1 00:32:36.328 Current LBA Format: LBA Format #00 00:32:36.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:36.328 00:32:36.328 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:36.328 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:36.328 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:36.328 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:36.328 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:36.328 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:36.328 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:36.328 rmmod nvme_tcp 00:32:36.589 rmmod nvme_fabrics 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.590 01:18:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:38.520 
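The nvmet configfs tree that clean_kernel_target tears down just below was created by configure_kernel_target earlier in this trace: subsystem nqn.2016-06.io.spdk:testnqn with namespace 1 backed by /dev/nvme0n1, and TCP port 1 listening on 10.0.0.1:4420, linked so the port exports the subsystem. Roughly the following; the redirect targets are not visible in the xtrace output above, so the attribute names below assume the standard nvmet configfs layout:

# sketch, assuming the standard nvmet configfs attribute names
modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1     > ports/1/addr_traddr
echo tcp          > ports/1/addr_trtype
echo 4420         > ports/1/addr_trsvcid
echo ipv4         > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The teardown traced below simply reverses this: remove the port/subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.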
01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:38.520 01:18:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:39.897 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:39.897 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:39.897 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:39.897 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:39.897 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:39.897 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:39.897 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:39.897 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:39.897 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:39.897 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:39.897 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:39.897 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:39.897 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:39.897 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:39.897 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:39.897 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:40.836 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:40.836 00:32:40.836 real 0m9.350s 00:32:40.836 user 0m1.976s 00:32:40.836 sys 0m3.315s 00:32:40.836 01:18:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:40.836 01:18:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:40.836 ************************************ 00:32:40.836 END TEST nvmf_identify_kernel_target 00:32:40.836 ************************************ 00:32:40.836 01:18:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:40.836 01:18:30 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:40.836 01:18:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:40.836 01:18:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.836 01:18:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.836 ************************************ 00:32:40.836 START TEST nvmf_auth_host 00:32:40.836 ************************************ 00:32:40.836 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:41.094 * Looking for test storage... 00:32:41.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:41.095 01:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.999 
01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:42.999 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:43.000 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:43.000 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:43.000 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:43.000 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:43.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:32:43.000 00:32:43.000 --- 10.0.0.2 ping statistics --- 00:32:43.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.000 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:43.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:32:43.000 00:32:43.000 --- 10.0.0.1 ping statistics --- 00:32:43.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.000 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1281501 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1281501 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1281501 ']' 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
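The nvmf_tcp_init steps traced above reduce to the following namespace plumbing: one detected port (cvl_0_0) is moved into a private namespace to act as one endpoint at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace at 10.0.0.1, and TCP port 4420 is opened between them. Restated as a plain sequence, with the commands copied from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # root-namespace side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # namespaced side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1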
00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:43.000 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.259 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:43.259 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:43.259 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:43.259 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:43.259 01:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b6251cb7c7e378161252b7cddf216625 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FV0 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b6251cb7c7e378161252b7cddf216625 0 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b6251cb7c7e378161252b7cddf216625 0 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b6251cb7c7e378161252b7cddf216625 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FV0 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FV0 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.FV0 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:43.518 
01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=94fe18b73ddc7e4f617345d0b3e3801c6889cdd364e60d4d30cc9972a50f6fb8 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.di1 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 94fe18b73ddc7e4f617345d0b3e3801c6889cdd364e60d4d30cc9972a50f6fb8 3 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 94fe18b73ddc7e4f617345d0b3e3801c6889cdd364e60d4d30cc9972a50f6fb8 3 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=94fe18b73ddc7e4f617345d0b3e3801c6889cdd364e60d4d30cc9972a50f6fb8 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.di1 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.di1 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.di1 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:43.518 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=250532adc0a8ce611dea0c0f9802c3f8b920ce887f46935f 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wfr 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 250532adc0a8ce611dea0c0f9802c3f8b920ce887f46935f 0 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 250532adc0a8ce611dea0c0f9802c3f8b920ce887f46935f 0 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=250532adc0a8ce611dea0c0f9802c3f8b920ce887f46935f 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wfr 00:32:43.519 01:18:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wfr 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wfr 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=63f68b42343749ad4530c24ce2dc3d9fc921a6f6b3f5ee2b 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VP7 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 63f68b42343749ad4530c24ce2dc3d9fc921a6f6b3f5ee2b 2 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 63f68b42343749ad4530c24ce2dc3d9fc921a6f6b3f5ee2b 2 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=63f68b42343749ad4530c24ce2dc3d9fc921a6f6b3f5ee2b 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VP7 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VP7 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.VP7 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=42acb7097d42326fb9e81af0dccc2922 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FFW 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 42acb7097d42326fb9e81af0dccc2922 1 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 42acb7097d42326fb9e81af0dccc2922 1 
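gen_dhchap_key first draws a random hex secret; the length argument in the calls traced above is the hex-string length, so xxd reads half that many bytes from /dev/urandom. A sketch of just that step (the helper name gen_hex_secret is illustrative, not from the script):

gen_hex_secret() {
  local len=$1                        # 32, 48 or 64 in the calls traced above
  xxd -p -c0 -l "$((len / 2))" /dev/urandom
}
gen_hex_secret 32                     # 16 random bytes -> 32 hex characters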
00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=42acb7097d42326fb9e81af0dccc2922 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:43.519 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FFW 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FFW 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FFW 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f452282727c220f52cc01c84b41ffeef 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VAQ 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f452282727c220f52cc01c84b41ffeef 1 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f452282727c220f52cc01c84b41ffeef 1 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f452282727c220f52cc01c84b41ffeef 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:43.778 01:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VAQ 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VAQ 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.VAQ 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=640e7dcfb716b826b19a25ee75b03b759cb8dfbe2255a60f 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ocJ 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 640e7dcfb716b826b19a25ee75b03b759cb8dfbe2255a60f 2 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 640e7dcfb716b826b19a25ee75b03b759cb8dfbe2255a60f 2 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=640e7dcfb716b826b19a25ee75b03b759cb8dfbe2255a60f 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ocJ 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ocJ 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ocJ 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c1c26dfbc9037374209353d9ed5f486f 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Djv 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c1c26dfbc9037374209353d9ed5f486f 0 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c1c26dfbc9037374209353d9ed5f486f 0 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c1c26dfbc9037374209353d9ed5f486f 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Djv 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Djv 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Djv 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=85ad15acfaa2ab1a2ccd1b6a7899dd75de4e0eac74f024ad6ec9af138d6695d6 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sSz 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 85ad15acfaa2ab1a2ccd1b6a7899dd75de4e0eac74f024ad6ec9af138d6695d6 3 00:32:43.778 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 85ad15acfaa2ab1a2ccd1b6a7899dd75de4e0eac74f024ad6ec9af138d6695d6 3 00:32:43.779 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:43.779 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:43.779 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=85ad15acfaa2ab1a2ccd1b6a7899dd75de4e0eac74f024ad6ec9af138d6695d6 00:32:43.779 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:43.779 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sSz 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sSz 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.sSz 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1281501 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1281501 ']' 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:44.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
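With all five key/ckey pairs generated, it is worth noting what the format_key DHHC-1 ... | python - step appears to produce: the strings registered later have the shape DHHC-1:<digest index>:<base64 payload>:, where the payload is four bytes longer than the hex secret. A hedged reconstruction, assuming the extra bytes are a little-endian CRC-32 of the secret as in nvme-cli's gen-dhchap-key format (an assumption, not confirmed by the trace itself):

format_dhchap_key_sketch() {
  local hexkey=$1 digest=$2
  python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                       # the ASCII hex string is the secret
crc = zlib.crc32(key).to_bytes(4, "little")      # assumed trailing checksum
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$hexkey" "$digest"
}
# should reproduce the key1 string seen later in the trace, if the CRC assumption holds:
format_dhchap_key_sketch 250532adc0a8ce611dea0c0f9802c3f8b920ce887f46935f 0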
00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:44.037 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FV0 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.di1 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.di1 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wfr 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.VP7 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VP7 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FFW 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.VAQ ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.VAQ 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ocJ 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Djv ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Djv 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.sSz 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
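The keyring_file_add_key loop just completed hands each generated key file to the running SPDK app under the names key0..key4 and ckey0..ckey3. Stand-alone equivalents of those RPCs; note that rpc_cmd in the trace forwarding to the repo's scripts/rpc.py against the default /var/tmp/spdk.sock socket is an inference, not something shown explicitly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.FV0
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.di1
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.wfr
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VP7
# ...and so on through key4 / ckey3, as in the loop above.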
00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:44.294 01:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:45.667 Waiting for block devices as requested 00:32:45.667 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:45.667 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:45.667 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:45.667 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:45.927 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:45.927 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:45.927 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:46.185 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:46.185 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:46.185 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:46.185 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:46.445 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:46.445 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:46.445 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:46.445 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:46.704 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:46.704 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:47.271 No valid GPT data, bailing 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:47.271 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:47.272 00:32:47.272 Discovery Log Number of Records 2, Generation counter 2 00:32:47.272 =====Discovery Log Entry 0====== 00:32:47.272 trtype: tcp 00:32:47.272 adrfam: ipv4 00:32:47.272 subtype: current discovery subsystem 00:32:47.272 treq: not specified, sq flow control disable supported 00:32:47.272 portid: 1 00:32:47.272 trsvcid: 4420 00:32:47.272 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:47.272 traddr: 10.0.0.1 00:32:47.272 eflags: none 00:32:47.272 sectype: none 00:32:47.272 =====Discovery Log Entry 1====== 00:32:47.272 trtype: tcp 00:32:47.272 adrfam: ipv4 00:32:47.272 subtype: nvme subsystem 00:32:47.272 treq: not specified, sq flow control disable supported 00:32:47.272 portid: 1 00:32:47.272 trsvcid: 4420 00:32:47.272 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:47.272 traddr: 10.0.0.1 00:32:47.272 eflags: none 00:32:47.272 sectype: none 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 
]] 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.272 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.531 nvme0n1 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.531 
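The mkdir/echo/ln sequence a few records back builds the kernel-side (nvmet) subsystem, namespace, port and allowed host through configfs, and nvmet_auth_set_key then arms DH-HMAC-CHAP for that host. The redirection targets are hidden by xtrace, so the attribute names below are taken from the upstream nvmet configfs ABI and are an assumption about where each echoed value lands:

cfs=/sys/kernel/config/nvmet
subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
host=$cfs/hosts/nqn.2024-02.io.spdk:host0
mkdir "$subsys" "$subsys/namespaces/1" "$cfs/ports/1" "$host"
echo /dev/nvme0n1   > "$subsys/namespaces/1/device_path"
echo 1              > "$subsys/namespaces/1/enable"
echo 10.0.0.1       > "$cfs/ports/1/addr_traddr"
echo tcp            > "$cfs/ports/1/addr_trtype"
echo 4420           > "$cfs/ports/1/addr_trsvcid"
echo ipv4           > "$cfs/ports/1/addr_adrfam"
ln -s "$subsys" "$cfs/ports/1/subsystems/"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"                  # digest chosen by nvmet_auth_set_key
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:MjUwNTMy...:' > "$host/dhchap_key"         # truncated; full key in the trace
echo 'DHHC-1:02:NjNmNjhi...:' > "$host/dhchap_ctrl_key"    # controller (bidirectional) key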
01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.531 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.532 
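connect_authenticate then exercises the SPDK host side: the bdev_nvme module is told which DH-HMAC-CHAP digests and DH groups it may negotiate, and the controller is attached using the keyring entries registered earlier. The flags below are copied from the keyid=1 connect traced above (same rpc.py/socket assumption as the keyring sketch); the per-digest loop starting here repeats the same pattern with a single digest/dhgroup and a different key index:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1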
01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.532 01:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.792 nvme0n1 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.792 01:18:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:47.792 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.793 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.053 nvme0n1 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
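Each authenticated attach in this run is verified the same way before moving on: list the controllers on the host, check that nvme0 exists, then detach so the next digest/dhgroup/key combination starts clean. As a stand-alone check (same rpc.py/socket assumption as above):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "authenticated attach failed"; exit 1; }
$rpc bdev_nvme_detach_controller nvme0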
00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.053 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.054 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.054 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.312 nvme0n1 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:48.312 01:18:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.312 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.313 nvme0n1 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.313 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.571 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.572 nvme0n1 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.572 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.830 01:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.831 nvme0n1 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.831 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.089 nvme0n1 00:32:49.089 
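The trace above repeats one pattern per key: restrict the initiator to the digest/dhgroup under test, attach the controller with the matching DH-HMAC-CHAP secrets, confirm the controller actually came up, then detach it. A minimal sketch of that connect_authenticate sequence, reconstructed from the traced rpc_cmd calls (the NQNs and the 10.0.0.1:4420 address are simply the values used in this run; this is a sketch, not the verbatim host/auth.sh):

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Only allow the digest/dhgroup pair being exercised in this iteration.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach with the host key, plus the controller key when one is defined for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
    # Authentication succeeded only if the controller is visible afterwards.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

In the actual run the -a address is not hard-coded; it is resolved through the get_main_ns_ip helper that appears throughout the trace.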
01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.089 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.090 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.349 nvme0n1 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
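On the target side, nvmet_auth_set_key only shows up in the trace as the echoed digest, DH group, key and optional controller key, because xtrace does not capture the redirections. A hedged sketch of where those echoes plausibly land, assuming the kernel nvmet configfs host attributes dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key; the paths below are an assumption, only the echoed values are in the log:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed configfs entry for the allowed host (not visible in the trace).
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "$hostdir/dhchap_hash"
    echo "$dhgroup" > "$hostdir/dhchap_dhgroup"
    echo "$key" > "$hostdir/dhchap_key"
    # A controller (bidirectional) key is only installed when the keyid has one.
    [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
}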
00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.349 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.608 nvme0n1 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.608 01:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.608 
01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.608 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.608 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.608 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.867 01:18:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.867 nvme0n1 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.867 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:50.127 01:18:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.127 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.387 nvme0n1 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.387 01:18:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.387 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.648 nvme0n1 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.648 01:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.648 01:18:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.648 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.907 nvme0n1 00:32:50.907 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.907 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.907 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.907 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.907 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.907 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
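The get_main_ns_ip helper traced before every attach picks the address to dial by transport: it maps each transport name to the name of an environment variable, expands that variable indirectly, and bails out if the result is empty. A plausible reconstruction from the -z checks in the trace (the TEST_TRANSPORT variable name is an assumption; only the candidate names and the final 10.0.0.1 appear in the log):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # Resolve the variable *name* for this transport, then expand it indirectly.
    ip=${ip_candidates[${TEST_TRANSPORT:-tcp}]}
    [[ -n $ip ]] || return 1
    ip=${!ip}
    [[ -n $ip ]] || return 1
    echo "$ip"    # 10.0.0.1 in this run
}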
00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.167 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.168 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.168 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.168 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.168 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.168 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:51.168 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.168 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.426 nvme0n1 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.426 01:18:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.426 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.689 nvme0n1 00:32:51.689 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.689 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.689 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.689 01:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.689 01:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:51.689 01:18:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.689 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.288 nvme0n1 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.288 
01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.288 01:18:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.288 01:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.856 nvme0n1 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.856 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.114 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.115 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.115 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.682 nvme0n1 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.682 
01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.682 01:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.249 nvme0n1 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.249 01:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.818 nvme0n1 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.818 01:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.755 nvme0n1 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.755 01:18:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.755 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.014 01:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.952 nvme0n1 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.952 01:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.886 nvme0n1 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.886 
01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.886 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
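(For readers following the RPC sequence: each iteration above reduces to two SPDK RPC calls on the host side — restrict the allowed DH-HMAC-CHAP digest/dhgroup, then attach the controller with the key under test, optionally with a controller key for bidirectional authentication. A minimal sketch of that sequence, assuming the standard scripts/rpc.py client, a target already listening on 10.0.0.1:4420 as in this run, and keys named key3/ckey3 registered earlier in the test — the registration step is outside this excerpt:)

#!/usr/bin/env bash
# Sketch of the host-side DH-HMAC-CHAP attach exercised by connect_authenticate().
# Addresses, NQNs and flags are taken from the log above; rpc.py path is assumed.
rpc=./scripts/rpc.py

# Allow only the digest/dhgroup pair under test (here: sha256 + ffdhe8192).
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Attach with key3; ckey3 additionally authenticates the controller to the host.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Verify the controller came up, then tear it down before the next iteration.
"$rpc" bdev_nvme_get_controllers
"$rpc" bdev_nvme_detach_controller nvme0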
00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.887 01:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.825 nvme0n1 00:32:58.825 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.825 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.825 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.825 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.825 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.825 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.085 
01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.085 01:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.025 nvme0n1 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.025 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.026 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.286 nvme0n1 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
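(The DHHC-1:NN:...: strings written above are NVMe DH-HMAC-CHAP configured secrets. Reading them against the NVMe in-band authentication secret format rather than anything in this log: the NN field indicates how the secret was generated (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, implying 32/48/64-byte secrets), and the base64 payload carries the secret followed by a 4-byte CRC-32. A quick sanity check on one of the controller keys from this run, assuming that interpretation:)

#!/usr/bin/env bash
# Decode the base64 field of a DHHC-1 secret seen above and check its length.
key='DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==:'

# Field 3 is the base64 blob between the second and third colon.
blob=$(printf '%s' "$key" | cut -d: -f3)

# A "02" (SHA-384) secret should decode to 52 bytes: 48-byte secret + 4-byte CRC-32.
printf '%s' "$blob" | base64 -d | wc -c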
00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.286 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 nvme0n1 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 nvme0n1 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.545 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.803 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.803 01:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.803 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.803 01:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.803 nvme0n1 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.803 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.062 nvme0n1 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.062 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
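The get_main_ns_ip trace entries around this point show the transport name selecting the *name* of an environment variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), which is then expanded indirectly to yield 10.0.0.1 on this run. A rough reconstruction of that selection logic, assuming the transport is carried in a variable such as TEST_TRANSPORT (the trace only shows the already-expanded value "tcp"):

  # Sketch of the candidate-selection logic visible in the trace; variable
  # names other than NVMF_INITIATOR_IP/NVMF_FIRST_TARGET_IP are assumptions.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1    # indirect expansion of the chosen variable
      echo "${!ip}"
  }

  TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # -> 10.0.0.1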
00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.321 nvme0n1 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
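Each connect_authenticate pass (such as the sha384/ffdhe3072/keyid-1 call just logged) reduces to four RPC calls against the running SPDK host application. A hand-driven sketch of one pass follows; the addresses, NQNs, and flags are copied from the trace, while key1/ckey1 are key names the test registered earlier in the run (not shown in this excerpt) and scripts/rpc.py is assumed to point at the same SPDK instance:

  # One connect/verify/detach cycle, equivalent to the traced RPC sequence.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0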
00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.321 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.580 nvme0n1 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.580 01:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.838 nvme0n1 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.838 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.839 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:01.839 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.839 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.097 nvme0n1 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.097 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.356 nvme0n1 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.356 01:18:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.356 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.616 01:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.876 nvme0n1 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.876 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.136 nvme0n1 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.136 01:18:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.136 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.137 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.395 nvme0n1 00:33:03.395 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.395 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.395 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.395 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.395 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.395 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:03.656 01:18:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.656 01:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.917 nvme0n1 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:03.917 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.177 nvme0n1 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.177 01:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.746 nvme0n1 00:33:04.746 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.746 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.746 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.746 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.746 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.746 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.747 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.317 nvme0n1 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.317 01:18:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.317 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.606 01:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.174 nvme0n1 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.174 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.738 nvme0n1 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
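The trace repeats one fixed pattern for every digest/DH-group/key combination. The target half of each iteration, nvmet_auth_set_key, provisions the kernel nvmet host entry with the hash, DH group and DHHC-1 secrets before the SPDK host tries to connect; the echo calls at host/auth.sh@48-51 above are exactly that step. A minimal sketch of it, assuming the usual nvmet configfs layout (the concrete paths are never printed in this trace, so treat them as an assumption):

  # kernel nvmet target side - provision DH-CHAP material for one host (sketch; configfs paths assumed)
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # digest picked by the outer loop
  echo ffdhe6144      > "$host_dir/dhchap_dhgroup"   # DH group picked by the outer loop
  echo "$key"         > "$host_dir/dhchap_key"       # host secret echoed at host/auth.sh@50
  [ -n "$ckey" ] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # controller secret for bidirectional auth (auth.sh@51); skipped when ckey is empty, as for keyid 4 above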
00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.738 01:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.306 nvme0n1 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
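The host half, connect_authenticate, then pins the SPDK initiator to the same digest and DH group, attaches a controller with the matching key pair, checks that bdev_nvme_get_controllers reports nvme0, and detaches again so the next combination starts clean; each successful attach also produces the namespace bdev nvme0n1, which is why that name recurs throughout the trace. A standalone equivalent of the traced RPC sequence, run with scripts/rpc.py against the SPDK application acting as the NVMe-oF host (NQNs and address taken from the trace; key0/ckey0 name key objects registered earlier in the test, outside this excerpt), would look roughly like:

  # SPDK host side - one authentication round (sketch of the traced RPC calls)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 on success
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0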
00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.306 01:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.242 nvme0n1 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.242 01:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.179 nvme0n1 00:33:09.179 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.179 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.179 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.179 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.179 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.179 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.438 01:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.374 nvme0n1 00:33:10.374 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.374 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.375 01:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.307 nvme0n1 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.307 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:11.308 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.308 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.308 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.308 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:11.308 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.308 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.566 01:19:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.566 01:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.505 nvme0n1 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:12.505 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.506 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.765 nvme0n1 00:33:12.765 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.765 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.766 01:19:01 
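(Editor's sketch.) The xtrace above repeats the same authentication cycle for every digest/dhgroup/keyid combination. As a hedged, condensed sketch of what one pass of that loop does on the SPDK initiator side — the rpc_cmd wrapper, nvmet_auth_set_key and the keys[]/ckeys[] arrays come from the traced host/auth.sh and are assumed to be sourced already; the loop bounds listed here are only the combinations visible in this part of the log, not necessarily the full test matrix:

# assumes rpc_cmd, nvmet_auth_set_key and the keys[]/ckeys[] arrays from the traced scripts are in scope
for digest in sha384 sha512; do                                   # digests visible in this trace
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192; do      # dhgroups visible in this trace
    for keyid in "${!keys[@]}"; do                                # keyids 0..4 in this run
      # install the host key (and controller key, when one exists) on the nvmet target
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # restrict the initiator to this digest/dhgroup pair before connecting
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # connect with DH-HMAC-CHAP; the ctrlr key is optional (keyid 4 has none in this run)
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # the controller must show up as nvme0, then it is detached for the next case
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done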
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.766 01:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.766 nvme0n1 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.766 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.025 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.026 nvme0n1 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.026 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.285 01:19:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.285 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.286 01:19:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.286 nvme0n1 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.286 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.547 nvme0n1 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.547 01:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.806 nvme0n1 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.806 
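(Editor's sketch.) Each connect step above also resolves the target address through get_main_ns_ip, whose per-transport lookup is visible in the expanded xtrace (nvmf/common.sh@741-755). A rough reconstruction of that helper, inferred only from the trace — the function wrapper, the $TEST_TRANSPORT name and the return values are assumptions; the candidate variable names and the 10.0.0.1 result are taken from the log:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs target the first target IP
        [tcp]=NVMF_INITIATOR_IP       # TCP runs (this job) use the initiator-side IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # trace: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                             # trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                           # trace: echo 10.0.0.1
}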
01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.806 01:19:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.806 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.065 nvme0n1 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.065 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.323 nvme0n1 00:33:14.323 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.323 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.323 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.323 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.323 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.323 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.324 01:19:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.324 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.582 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.582 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.583 nvme0n1 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:14.583 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.840 
01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.840 01:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.840 nvme0n1 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.840 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.100 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.359 nvme0n1 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.359 01:19:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.359 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.618 nvme0n1 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.618 01:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.890 nvme0n1 00:33:15.890 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.890 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:15.890 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.890 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.890 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.890 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.150 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.410 nvme0n1 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:16.410 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.411 01:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.670 nvme0n1 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.670 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.929 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
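The DHHC-1 strings sprinkled through this trace are DH-HMAC-CHAP secret representations. As I read the format (following the nvme-cli convention, which is an assumption here), the second field identifies the hash used to transform the secret (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the third field is the base64 encoding of the secret with a CRC-32 appended. A quick way to inspect the key0 value that appears just above; the byte breakdown assumes that CRC convention holds:

    key='DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB:'
    # third colon-separated field is the base64 payload; 36 bytes decoded
    # would be a 32-byte secret plus the 4-byte CRC-32
    cut -d: -f3 <<< "$key" | base64 -d | wc -c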
00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.930 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.188 nvme0n1 00:33:17.188 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.188 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.188 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.188 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.188 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.188 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
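The get_main_ns_ip fragments that recur in this log resolve which address the host should dial for a given transport: an associative array maps the transport name to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), and the value behind that variable (10.0.0.1 in this run) is echoed after a couple of emptiness checks. A condensed sketch of that logic; the TEST_TRANSPORT variable name and the use of bash indirect expansion are assumptions, since the trace only shows the resolved values:

    get_main_ns_ip() {
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        local ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP when TEST_TRANSPORT=tcp
        [[ -z $TEST_TRANSPORT || -z $ip ]] && return 1
        [[ -z ${!ip} ]] && return 1                  # dereference the variable named in $ip
        echo "${!ip}"                                # prints 10.0.0.1 in this run
    }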
00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.449 01:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.020 nvme0n1 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.020 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.590 nvme0n1 00:33:18.590 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.590 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.590 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.590 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.590 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.590 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.590 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.591 01:19:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.194 nvme0n1 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.194 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.764 nvme0n1 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.764 01:19:08 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjYyNTFjYjdjN2UzNzgxNjEyNTJiN2NkZGYyMTY2MjWzF6YB: 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTRmZTE4YjczZGRjN2U0ZjYxNzM0NWQwYjNlMzgwMWM2ODg5Y2RkMzY0ZTYwZDRkMzBjYzk5NzJhNTBmNmZiOPxOj5c=: 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.764 01:19:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.703 nvme0n1 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.703 01:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.644 nvme0n1 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.644 01:19:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDJhY2I3MDk3ZDQyMzI2ZmI5ZTgxYWYwZGNjYzI5MjJlo+lP: 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: ]] 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjQ1MjI4MjcyN2MyMjBmNTJjYzAxYzg0YjQxZmZlZWZZVc43: 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.644 01:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.581 nvme0n1 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:22.581 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjQwZTdkY2ZiNzE2YjgyNmIxOWEyNWVlNzViMDNiNzU5Y2I4ZGZiZTIyNTVhNjBmJ1spZQ==: 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: ]] 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzFjMjZkZmJjOTAzNzM3NDIwOTM1M2Q5ZWQ1ZjQ4NmZA1xyf: 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:22.582 01:19:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.582 01:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.521 nvme0n1 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.521 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODVhZDE1YWNmYWEyYWIxYTJjY2QxYjZhNzg5OWRkNzVkZTRlMGVhYzc0ZjAyNGFkNmVjOWFmMTM4ZDY2OTVkNjclKYE=: 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:23.779 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:23.780 01:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.718 nvme0n1 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjUwNTMyYWRjMGE4Y2U2MTFkZWEwYzBmOTgwMmMzZjhiOTIwY2U4ODdmNDY5MzVmZNsdpg==: 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjNmNjhiNDIzNDM3NDlhZDQ1MzBjMjRjZTJkYzNkOWZjOTIxYTZmNmIzZjVlZTJivos7rw==: 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.718 
01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.718 01:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.718 request: 00:33:24.718 { 00:33:24.718 "name": "nvme0", 00:33:24.718 "trtype": "tcp", 00:33:24.718 "traddr": "10.0.0.1", 00:33:24.718 "adrfam": "ipv4", 00:33:24.718 "trsvcid": "4420", 00:33:24.718 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:24.718 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:24.718 "prchk_reftag": false, 00:33:24.718 "prchk_guard": false, 00:33:24.718 "hdgst": false, 00:33:24.718 "ddgst": false, 00:33:24.718 "method": "bdev_nvme_attach_controller", 00:33:24.718 "req_id": 1 00:33:24.718 } 00:33:24.718 Got JSON-RPC error response 00:33:24.718 response: 00:33:24.718 { 00:33:24.718 "code": -5, 00:33:24.718 "message": "Input/output error" 00:33:24.718 } 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:24.718 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.719 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:24.719 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.719 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.978 request: 00:33:24.978 { 00:33:24.978 "name": "nvme0", 00:33:24.978 "trtype": "tcp", 00:33:24.978 "traddr": "10.0.0.1", 00:33:24.978 "adrfam": "ipv4", 00:33:24.978 "trsvcid": "4420", 00:33:24.978 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:24.978 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:24.978 "prchk_reftag": false, 00:33:24.978 "prchk_guard": false, 00:33:24.978 "hdgst": false, 00:33:24.978 "ddgst": false, 00:33:24.978 "dhchap_key": "key2", 00:33:24.978 "method": "bdev_nvme_attach_controller", 00:33:24.978 "req_id": 1 00:33:24.978 } 00:33:24.978 Got JSON-RPC error response 00:33:24.978 response: 00:33:24.978 { 00:33:24.978 "code": -5, 00:33:24.978 "message": "Input/output error" 00:33:24.978 } 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:24.978 01:19:14 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:24.978 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:24.979 request: 00:33:24.979 { 00:33:24.979 "name": "nvme0", 00:33:24.979 "trtype": "tcp", 00:33:24.979 "traddr": "10.0.0.1", 00:33:24.979 "adrfam": "ipv4", 
00:33:24.979 "trsvcid": "4420", 00:33:24.979 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:24.979 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:24.979 "prchk_reftag": false, 00:33:24.979 "prchk_guard": false, 00:33:24.979 "hdgst": false, 00:33:24.979 "ddgst": false, 00:33:24.979 "dhchap_key": "key1", 00:33:24.979 "dhchap_ctrlr_key": "ckey2", 00:33:24.979 "method": "bdev_nvme_attach_controller", 00:33:24.979 "req_id": 1 00:33:24.979 } 00:33:24.979 Got JSON-RPC error response 00:33:24.979 response: 00:33:24.979 { 00:33:24.979 "code": -5, 00:33:24.979 "message": "Input/output error" 00:33:24.979 } 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:24.979 rmmod nvme_tcp 00:33:24.979 rmmod nvme_fabrics 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1281501 ']' 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1281501 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1281501 ']' 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1281501 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:24.979 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1281501 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1281501' 00:33:25.237 killing process with pid 1281501 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1281501 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1281501 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:25.237 01:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:27.771 01:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:28.707 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:28.707 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:28.707 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:28.708 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:28.708 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:28.708 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:28.708 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:28.708 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:28.708 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:28.708 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:28.708 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:28.708 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:28.708 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:28.708 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:28.708 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:28.708 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:29.647 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:29.647 01:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.FV0 /tmp/spdk.key-null.wfr /tmp/spdk.key-sha256.FFW /tmp/spdk.key-sha384.ocJ /tmp/spdk.key-sha512.sSz 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:29.647 01:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:31.021 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:31.021 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:31.021 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:31.021 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:31.021 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:31.021 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:31.021 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:31.021 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:31.021 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:31.021 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:31.021 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:31.021 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:31.021 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:31.021 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:31.021 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:31.021 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:31.021 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:31.021 00:33:31.021 real 0m50.007s 00:33:31.021 user 0m47.895s 00:33:31.021 sys 0m5.879s 00:33:31.021 01:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:31.021 01:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.021 ************************************ 00:33:31.021 END TEST nvmf_auth_host 00:33:31.021 ************************************ 00:33:31.021 01:19:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:31.021 01:19:20 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:31.021 01:19:20 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:31.021 01:19:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:31.021 01:19:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:31.021 01:19:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:31.021 ************************************ 00:33:31.021 START TEST nvmf_digest 00:33:31.021 ************************************ 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:31.021 * Looking for test storage... 
00:33:31.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:31.021 01:19:20 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:31.021 01:19:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:32.923 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:32.923 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:32.923 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:32.923 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.923 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:33.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:33:33.183 00:33:33.183 --- 10.0.0.2 ping statistics --- 00:33:33.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.183 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:33.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:33.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:33:33.183 00:33:33.183 --- 10.0.0.1 ping statistics --- 00:33:33.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.183 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:33.183 ************************************ 00:33:33.183 START TEST nvmf_digest_clean 00:33:33.183 ************************************ 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1290980 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:33.183 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1290980 00:33:33.184 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1290980 ']' 00:33:33.184 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.184 
01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:33.184 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.184 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:33.184 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:33.184 [2024-07-14 01:19:22.501389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:33.184 [2024-07-14 01:19:22.501482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.184 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.184 [2024-07-14 01:19:22.566767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.442 [2024-07-14 01:19:22.654588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.442 [2024-07-14 01:19:22.654650] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.442 [2024-07-14 01:19:22.654678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.442 [2024-07-14 01:19:22.654690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.442 [2024-07-14 01:19:22.654699] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:33.442 [2024-07-14 01:19:22.654734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.442 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:33.442 null0 00:33:33.442 [2024-07-14 01:19:22.846396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.699 [2024-07-14 01:19:22.870620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1291003 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1291003 /var/tmp/bperf.sock 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1291003 ']' 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:33.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:33.699 01:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:33.699 [2024-07-14 01:19:22.920884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:33.699 [2024-07-14 01:19:22.920962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291003 ] 00:33:33.699 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.699 [2024-07-14 01:19:22.989293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.699 [2024-07-14 01:19:23.083321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.958 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:33.958 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:33.958 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:33.958 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:33.958 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:34.216 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:34.216 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:34.473 nvme0n1 00:33:34.473 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:34.474 01:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:34.763 Running I/O for 2 seconds... 
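For reference, the nvmf_digest_clean pass traced above reduces to a short RPC sequence. Below is a minimal sketch assuming the same bperf socket, target address and subsystem NQN that appear in this log (paths abbreviated relative to the SPDK tree; this is an illustration, not the literal host/digest.sh code):

  # bdevperf was started with --wait-for-rpc, so kick off its framework first
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # attach the NVMe/TCP controller with data digest (--ddgst) enabled
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the configured randread workload (4096-byte I/O, queue depth 128, 2 seconds)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests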
00:33:36.667 00:33:36.667 Latency(us) 00:33:36.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.667 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:36.667 nvme0n1 : 2.00 19317.75 75.46 0.00 0.00 6616.50 2985.53 11213.94 00:33:36.667 =================================================================================================================== 00:33:36.667 Total : 19317.75 75.46 0.00 0.00 6616.50 2985.53 11213.94 00:33:36.667 0 00:33:36.667 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:36.667 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:36.667 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:36.667 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:36.667 | select(.opcode=="crc32c") 00:33:36.667 | "\(.module_name) \(.executed)"' 00:33:36.668 01:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1291003 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1291003 ']' 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1291003 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1291003 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1291003' 00:33:36.926 killing process with pid 1291003 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1291003 00:33:36.926 Received shutdown signal, test time was about 2.000000 seconds 00:33:36.926 00:33:36.926 Latency(us) 00:33:36.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.926 =================================================================================================================== 00:33:36.926 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:36.926 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1291003 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:37.184 01:19:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1291415 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1291415 /var/tmp/bperf.sock 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1291415 ']' 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:37.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:37.184 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:37.184 [2024-07-14 01:19:26.530825] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:37.184 [2024-07-14 01:19:26.530924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291415 ] 00:33:37.184 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:37.184 Zero copy mechanism will not be used. 
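The bdevperf command line above carries exactly the knobs run_bperf was invoked with (randread, 131072-byte I/Os, queue depth 16, scan_dsa=false). A reading of the individual flags, inferred from the invocation and the job summary lines rather than quoted from bdevperf's help text:

  # build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
  #   -m 2              core mask 0x2, i.e. run on core 1 ("Reactor started on core 1")
  #   -r .../bperf.sock  UNIX-domain RPC socket the bperf_rpc/bperf_py helpers talk to
  #   -w randread       workload type
  #   -o 131072         I/O size in bytes
  #   -t 2              run time in seconds ("Running I/O for 2 seconds...")
  #   -q 16             queue depth
  #   -z                start idle and wait for the perform_tests RPC instead of running immediately
  #   --wait-for-rpc    hold subsystem init until framework_start_init is issued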
00:33:37.184 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.184 [2024-07-14 01:19:26.588773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.442 [2024-07-14 01:19:26.678654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.442 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:37.442 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:37.442 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:37.442 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:37.442 01:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:37.701 01:19:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:37.701 01:19:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.270 nvme0n1 00:33:38.270 01:19:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:38.270 01:19:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:38.270 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:38.270 Zero copy mechanism will not be used. 00:33:38.270 Running I/O for 2 seconds... 
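After each run the script checks that the CRC32C digest work really happened in the expected accel module (software here, since scan_dsa is false): it issues accel_get_stats on the bperf socket and filters the reply with the jq expression visible after the first run above. Sketched as standalone shell; the JSON layout of the reply is assumed for illustration, only the module_name/executed fields the filter extracts are relied on:

  read -r acc_module acc_executed < <(
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))            # at least one crc32c operation must have run
  [[ $acc_module == software ]]     # and it must have run in the expected module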
00:33:40.179 00:33:40.179 Latency(us) 00:33:40.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.179 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:40.179 nvme0n1 : 2.01 2656.16 332.02 0.00 0.00 6019.59 5704.06 14272.28 00:33:40.179 =================================================================================================================== 00:33:40.179 Total : 2656.16 332.02 0.00 0.00 6019.59 5704.06 14272.28 00:33:40.179 0 00:33:40.179 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:40.179 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:40.179 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:40.179 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:40.179 | select(.opcode=="crc32c") 00:33:40.179 | "\(.module_name) \(.executed)"' 00:33:40.179 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1291415 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1291415 ']' 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1291415 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:40.437 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1291415 00:33:40.696 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:40.696 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:40.696 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1291415' 00:33:40.696 killing process with pid 1291415 00:33:40.696 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1291415 00:33:40.696 Received shutdown signal, test time was about 2.000000 seconds 00:33:40.696 00:33:40.696 Latency(us) 00:33:40.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:40.696 =================================================================================================================== 00:33:40.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:40.697 01:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1291415 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:40.697 01:19:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1291819 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1291819 /var/tmp/bperf.sock 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1291819 ']' 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:40.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:40.697 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.956 [2024-07-14 01:19:30.136501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:33:40.956 [2024-07-14 01:19:30.136592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291819 ] 00:33:40.956 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.956 [2024-07-14 01:19:30.194938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.956 [2024-07-14 01:19:30.280704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.956 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:40.956 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:40.956 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:40.956 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:40.956 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:41.522 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.522 01:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.780 nvme0n1 00:33:41.780 01:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:41.780 01:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:42.038 Running I/O for 2 seconds... 
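In the Latency(us) tables above, the MiB/s column is simply IOPS times the I/O size, and the last three columns are average/min/max completion latency in microseconds: 19317.75 IOPS x 4096 B comes to about 75.46 MiB/s for the 4 KiB randread run, and 2656.16 IOPS x 131072 B to about 332.02 MiB/s for the 128 KiB run. A quick check of the first row (a throwaway one-liner, not part of the test):

  awk 'BEGIN { printf "%.2f MiB/s\n", 19317.75 * 4096 / 1048576 }'    # -> 75.46 MiB/s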
00:33:43.947 00:33:43.947 Latency(us) 00:33:43.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:43.947 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:43.947 nvme0n1 : 2.00 21524.25 84.08 0.00 0.00 5937.58 3301.07 12815.93 00:33:43.947 =================================================================================================================== 00:33:43.947 Total : 21524.25 84.08 0.00 0.00 5937.58 3301.07 12815.93 00:33:43.947 0 00:33:43.947 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:43.947 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:43.947 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:43.947 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:43.947 | select(.opcode=="crc32c") 00:33:43.947 | "\(.module_name) \(.executed)"' 00:33:43.947 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1291819 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1291819 ']' 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1291819 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1291819 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1291819' 00:33:44.205 killing process with pid 1291819 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1291819 00:33:44.205 Received shutdown signal, test time was about 2.000000 seconds 00:33:44.205 00:33:44.205 Latency(us) 00:33:44.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.205 =================================================================================================================== 00:33:44.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.205 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1291819 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:44.463 01:19:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1292239 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1292239 /var/tmp/bperf.sock 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1292239 ']' 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:44.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:44.463 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:44.463 [2024-07-14 01:19:33.785170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:44.463 [2024-07-14 01:19:33.785265] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292239 ] 00:33:44.463 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:44.463 Zero copy mechanism will not be used. 
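The waitforlisten step above (local max_retries=100) simply blocks until the freshly forked bdevperf answers on /var/tmp/bperf.sock before any RPCs are sent to it. One way to express that wait; the real helper in autotest_common.sh differs in detail:

  # poll the RPC socket until bdevperf responds, up to 100 attempts
  for ((i = 0; i < 100; i++)); do
      scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done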
00:33:44.463 EAL: No free 2048 kB hugepages reported on node 1 00:33:44.463 [2024-07-14 01:19:33.850486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.720 [2024-07-14 01:19:33.941515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:44.720 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.720 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:44.720 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:44.720 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:44.720 01:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:44.979 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:44.979 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.237 nvme0n1 00:33:45.237 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:45.237 01:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.496 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.496 Zero copy mechanism will not be used. 00:33:45.496 Running I/O for 2 seconds... 
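Each run is torn down by the killprocess helper visible after the earlier runs: it confirms the PID's command name (reactor_1 for these bperf instances), makes sure it is not about to SIGKILL a sudo wrapper, then kills and reaps the process, which is when bdevperf prints its shutdown-time stats. Roughly, following the trace (the helper itself has more branches):

  pid=1291819                                        # the instance torn down after the previous run
  [[ $(uname) == Linux ]]                            # the ps-based check is Linux-only in the trace
  process_name=$(ps --no-headers -o comm= "$pid")    # -> reactor_1
  [[ $process_name != sudo ]]                        # the trace compares against "sudo" before choosing how to kill
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                        # the "Received shutdown signal" block is printed while it exits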
00:33:47.392 00:33:47.392 Latency(us) 00:33:47.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.392 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:47.392 nvme0n1 : 2.01 1690.77 211.35 0.00 0.00 9437.73 3422.44 14660.65 00:33:47.392 =================================================================================================================== 00:33:47.392 Total : 1690.77 211.35 0.00 0.00 9437.73 3422.44 14660.65 00:33:47.392 0 00:33:47.392 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:47.392 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:47.392 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:47.392 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:47.392 01:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:47.392 | select(.opcode=="crc32c") 00:33:47.392 | "\(.module_name) \(.executed)"' 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1292239 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1292239 ']' 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1292239 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1292239 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1292239' 00:33:47.650 killing process with pid 1292239 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1292239 00:33:47.650 Received shutdown signal, test time was about 2.000000 seconds 00:33:47.650 00:33:47.650 Latency(us) 00:33:47.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.650 =================================================================================================================== 00:33:47.650 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.650 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1292239 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1290980 00:33:47.908 01:19:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1290980 ']' 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1290980 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290980 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290980' 00:33:47.908 killing process with pid 1290980 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1290980 00:33:47.908 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1290980 00:33:48.166 00:33:48.166 real 0m15.024s 00:33:48.166 user 0m30.271s 00:33:48.166 sys 0m3.801s 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:48.166 ************************************ 00:33:48.166 END TEST nvmf_digest_clean 00:33:48.166 ************************************ 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:48.166 ************************************ 00:33:48.166 START TEST nvmf_digest_error 00:33:48.166 ************************************ 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1292776 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1292776 00:33:48.166 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1292776 ']' 00:33:48.167 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:48.167 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:48.167 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.167 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:48.167 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.167 [2024-07-14 01:19:37.577050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:48.167 [2024-07-14 01:19:37.577129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.426 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.426 [2024-07-14 01:19:37.640810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.426 [2024-07-14 01:19:37.726636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.426 [2024-07-14 01:19:37.726713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.426 [2024-07-14 01:19:37.726726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.426 [2024-07-14 01:19:37.726737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.426 [2024-07-14 01:19:37.726747] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
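For the nvmf_digest_error test the target is brought up with --wait-for-rpc so that crc32c can be routed through the error-injection accel module before initialization completes; the notices that follow show the rest of the common target config: a null0 bdev, the TCP transport, and a listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1. Expressed as plain RPC calls against the default /var/tmp/spdk.sock this is approximately the following; the helper's exact commands (and the null bdev's size and block size, shown here as placeholders) are not in the trace:

  scripts/rpc.py accel_assign_opc -o crc32c -m error        # must land before framework_start_init
  scripts/rpc.py framework_start_init
  scripts/rpc.py bdev_null_create null0 100 4096            # placeholder size_mb / block_size
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420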
00:33:48.426 [2024-07-14 01:19:37.726780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.426 [2024-07-14 01:19:37.815401] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.426 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.684 null0 00:33:48.684 [2024-07-14 01:19:37.934242] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.684 [2024-07-14 01:19:37.958454] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1292802 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1292802 /var/tmp/bperf.sock 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1292802 ']' 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:48.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:48.684 01:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.684 [2024-07-14 01:19:38.004330] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:48.684 [2024-07-14 01:19:38.004392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292802 ] 00:33:48.684 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.684 [2024-07-14 01:19:38.064504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.941 [2024-07-14 01:19:38.155337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.941 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.941 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:48.941 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:48.941 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:49.198 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:49.198 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.198 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.198 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.198 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:49.198 01:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:49.763 nvme0n1 00:33:49.763 01:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:49.763 01:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.763 01:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:49.763 01:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.763 01:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:49.763 01:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:49.763 Running I/O for 2 seconds... 00:33:49.763 [2024-07-14 01:19:39.137304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:49.763 [2024-07-14 01:19:39.137367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.763 [2024-07-14 01:19:39.137388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.763 [2024-07-14 01:19:39.152068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:49.763 [2024-07-14 01:19:39.152101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.763 [2024-07-14 01:19:39.152118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.763 [2024-07-14 01:19:39.164933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:49.763 [2024-07-14 01:19:39.164965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.763 [2024-07-14 01:19:39.164988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.179863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.179931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.179949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.193715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.193750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.193770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.209126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.209178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.209206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.220756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.220789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22439 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.220808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.234809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.234843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.234861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.249153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.249212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.249234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.263541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.263575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.263599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.275261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.275295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.275313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.289134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.289193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.289212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.303438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.303473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.303494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.315399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.315434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:2331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.315453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.330736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.330770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.330790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.344315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.344349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.344375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.356749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.356783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.356803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.371203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.371237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.371256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.383937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.383969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.383986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.397842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.397910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.397929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.410495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.410530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.410550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.425708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.425744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.425764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.438547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.438584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.438610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.050 [2024-07-14 01:19:39.451861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.050 [2024-07-14 01:19:39.451929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.050 [2024-07-14 01:19:39.451947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.465880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.465929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.465946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.482465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.482500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.482520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.495532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.495567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.495587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.508761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 
[2024-07-14 01:19:39.508795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.508814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.521183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.521217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.521237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.535196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.535230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.535248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.549586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.549623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.549643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.564567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.564608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.564628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.576361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.576397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.576415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.591118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.591162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.591181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.604130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.604178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.604197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.617064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.617095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.617114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.630090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.630120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.630154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.645829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.645863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.645900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.659449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.659484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.659504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.672443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.672477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.672496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.684853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.684913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.684933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.698670] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.698704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.698723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.310 [2024-07-14 01:19:39.714551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.310 [2024-07-14 01:19:39.714586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.310 [2024-07-14 01:19:39.714605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.726764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.726800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.726819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.743924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.743958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.743978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.755309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.755344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.755363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.769718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.769754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.769773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.785210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.785245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.785266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:50.569 [2024-07-14 01:19:39.797707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.797742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.797766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.810984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.811014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.811032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.823692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.823725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.823745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.838545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.569 [2024-07-14 01:19:39.838580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.569 [2024-07-14 01:19:39.838599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.569 [2024-07-14 01:19:39.850203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.850237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.850256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.864608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.864644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.864663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.877786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.877820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.877840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.892787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.892821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.892840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.904636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.904671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.904689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.919645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.919685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.919704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.932714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.932749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.932767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.944222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.944270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.944289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.959422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.959456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.959475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.570 [2024-07-14 01:19:39.974531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.570 [2024-07-14 01:19:39.974566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.570 [2024-07-14 01:19:39.974585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:39.991658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:39.991694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:39.991714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.003521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.003555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.003575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.020281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.020334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.020355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.032344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.032380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.032400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.048030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.048066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.048085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.062969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.063001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.063018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.073182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.073226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.073242] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.086643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.086673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.086689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.101778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.101813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.101832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.113713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.113748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.113768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.128548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.128583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.128603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.142973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.143005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.143022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.154518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.154553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.154579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.169255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.169290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:50.830 [2024-07-14 01:19:40.169309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.183374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.183409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.183428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.195993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.830 [2024-07-14 01:19:40.196025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.830 [2024-07-14 01:19:40.196042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.830 [2024-07-14 01:19:40.209352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.831 [2024-07-14 01:19:40.209387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.831 [2024-07-14 01:19:40.209406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.831 [2024-07-14 01:19:40.222764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.831 [2024-07-14 01:19:40.222797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.831 [2024-07-14 01:19:40.222815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.831 [2024-07-14 01:19:40.237481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:50.831 [2024-07-14 01:19:40.237515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.831 [2024-07-14 01:19:40.237535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.251771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.251806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.251825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.263505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.263540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:24866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.263559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.280190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.280224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.280243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.294132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.294178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.294197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.306007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.306035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.306050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.321056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.321086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.321102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.333331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.333366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.333385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.347138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.347169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.347203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.360990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.361022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.361039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.377064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.377111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.377128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.389171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.389220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.389245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.404394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.404428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.404447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.415840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.415881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.415902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.431919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.431948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.431978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.445740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.445775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.445794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.458601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 
[2024-07-14 01:19:40.458636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.458655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.472360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.472395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.472414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.486485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.486520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.486539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.090 [2024-07-14 01:19:40.498811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.090 [2024-07-14 01:19:40.498845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.090 [2024-07-14 01:19:40.498864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.514535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.514579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.514599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.526418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.526454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.526473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.540720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.540755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.540774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.553618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.553652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.553671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.567667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.567702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.567721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.581284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.581318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.581337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.596103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.596135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.596153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.609071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.609102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.609119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.621781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.621816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.621835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.636460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.636494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.636513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.649755] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.649790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.649809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.662776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.662811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.662830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.676199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.676234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.676253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.350 [2024-07-14 01:19:40.689556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.350 [2024-07-14 01:19:40.689591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.350 [2024-07-14 01:19:40.689610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.351 [2024-07-14 01:19:40.702446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.351 [2024-07-14 01:19:40.702482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.351 [2024-07-14 01:19:40.702501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.351 [2024-07-14 01:19:40.716178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.351 [2024-07-14 01:19:40.716235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.351 [2024-07-14 01:19:40.716254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.351 [2024-07-14 01:19:40.730796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.351 [2024-07-14 01:19:40.730829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.351 [2024-07-14 01:19:40.730847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:51.351 [2024-07-14 01:19:40.741422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.351 [2024-07-14 01:19:40.741453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.351 [2024-07-14 01:19:40.741477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.351 [2024-07-14 01:19:40.754623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.351 [2024-07-14 01:19:40.754654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.351 [2024-07-14 01:19:40.754671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.767649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.767680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.767696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.779818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.779849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.779873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.792739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.792770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.792801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.804951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.804980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.804996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.815911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.815940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.815956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.829750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.829782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.829799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.841794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.841825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.841842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.854229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.854264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.854280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.867809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.867840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.867857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.880060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.880091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.880108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.891426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.891455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.891471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.904776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.904807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.904825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.918015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.918046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.918064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.928678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.928707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.928723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.942296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.942327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.942343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.954361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.954392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.954409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.967587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.967619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.967636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.978365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.978395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.610 [2024-07-14 01:19:40.978411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:40.992792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:40.992823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:51.610 [2024-07-14 01:19:40.992840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.610 [2024-07-14 01:19:41.006289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.610 [2024-07-14 01:19:41.006320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.611 [2024-07-14 01:19:41.006337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.611 [2024-07-14 01:19:41.018208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.611 [2024-07-14 01:19:41.018239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.611 [2024-07-14 01:19:41.018257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 [2024-07-14 01:19:41.029600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.868 [2024-07-14 01:19:41.029631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.868 [2024-07-14 01:19:41.029648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 [2024-07-14 01:19:41.043423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.868 [2024-07-14 01:19:41.043455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.868 [2024-07-14 01:19:41.043473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 [2024-07-14 01:19:41.056075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.868 [2024-07-14 01:19:41.056105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.868 [2024-07-14 01:19:41.056123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 [2024-07-14 01:19:41.066814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.868 [2024-07-14 01:19:41.066864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.868 [2024-07-14 01:19:41.066887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 [2024-07-14 01:19:41.080587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.868 [2024-07-14 01:19:41.080619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1663 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.868 [2024-07-14 01:19:41.080636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 [2024-07-14 01:19:41.095863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.868 [2024-07-14 01:19:41.095900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.868 [2024-07-14 01:19:41.095917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 [2024-07-14 01:19:41.107328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.868 [2024-07-14 01:19:41.107360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.868 [2024-07-14 01:19:41.107377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 [2024-07-14 01:19:41.119900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc6f9d0) 00:33:51.868 [2024-07-14 01:19:41.119931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.868 [2024-07-14 01:19:41.119948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.868 00:33:51.868 Latency(us) 00:33:51.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.868 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:51.868 nvme0n1 : 2.00 18878.03 73.74 0.00 0.00 6771.44 3203.98 18738.44 00:33:51.868 =================================================================================================================== 00:33:51.868 Total : 18878.03 73.74 0.00 0.00 6771.44 3203.98 18738.44 00:33:51.868 0 00:33:51.868 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:51.868 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:51.868 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:51.868 | .driver_specific 00:33:51.868 | .nvme_error 00:33:51.868 | .status_code 00:33:51.868 | .command_transient_transport_error' 00:33:51.868 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1292802 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1292802 ']' 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1292802 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:52.127 01:19:41 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1292802 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1292802' 00:33:52.127 killing process with pid 1292802 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1292802 00:33:52.127 Received shutdown signal, test time was about 2.000000 seconds 00:33:52.127 00:33:52.127 Latency(us) 00:33:52.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.127 =================================================================================================================== 00:33:52.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:52.127 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1292802 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1293213 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1293213 /var/tmp/bperf.sock 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1293213 ']' 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:52.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:52.386 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:52.386 [2024-07-14 01:19:41.691189] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
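The shell trace just above shows how host/digest.sh turns the flood of data digest notices into a pass/fail check: get_transient_errcount reads the initiator's bdev I/O statistics over the bperf RPC socket and extracts the COMMAND TRANSIENT TRANSPORT ERROR counter kept by --nvme-error-stat (148 for this run, hence the (( 148 > 0 )) assertion before the bdevperf process is killed). A minimal sketch of that query, built only from the socket path, bdev name and jq filter visible in the trace (illustrative, not the digest.sh source itself):

  # assumes bdevperf is still listening on /var/tmp/bperf.sock and exposes bdev nvme0n1, as in the trace above
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the test passes only if the injected digest errors were surfaced as transient transport errors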
00:33:52.386 [2024-07-14 01:19:41.691293] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293213 ] 00:33:52.386 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:52.386 Zero copy mechanism will not be used. 00:33:52.386 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.386 [2024-07-14 01:19:41.754928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.643 [2024-07-14 01:19:41.841794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.643 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:52.643 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:52.643 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:52.643 01:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:52.900 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:52.900 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:52.900 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:52.900 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:52.900 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:52.900 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:53.159 nvme0n1 00:33:53.416 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:53.416 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:53.416 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:53.416 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:53.416 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:53.416 01:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:53.416 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:53.416 Zero copy mechanism will not be used. 00:33:53.416 Running I/O for 2 seconds... 
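Everything from here down is the second error-injection pass: bdevperf was started idle (-z) against the bperf socket, NVMe error statistics and unlimited bdev retries were enabled, a controller was attached over TCP with data digest enabled (--ddgst), and crc32c corruption was injected through accel_error_inject_error before perform_tests was triggered; each corrupted digest then shows up below as a "data digest error" followed by a COMMAND TRANSIENT TRANSPORT ERROR completion. A condensed sketch of that driver sequence, using only commands that appear verbatim in the log (SPDK_DIR and the short sleep in place of waitforlisten are assumptions):

#!/usr/bin/env bash
# Sketch of the logged randread / 131072 / qd16 digest-error run; not the test script itself.
SPDK_DIR=${SPDK_DIR:-.}
SOCK=/var/tmp/bperf.sock

# Start bdevperf idle (-z) so it can be configured over $SOCK before the workload runs.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
sleep 1   # the real test polls the socket with waitforlisten; a sleep stands in here

rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" "$@"; }

rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc accel_error_inject_error -o crc32c -t disable              # start with injection off
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc accel_error_inject_error -o crc32c -t corrupt -i 32        # corrupt 32 crc32c operations

# Kick off the 2-second randread run; the corrupted digests produce the
# "data digest error" / TRANSIENT TRANSPORT ERROR pairs recorded below.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

kill "$bperfpid"   # the test uses killprocess after reading the error counter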
00:33:53.416 [2024-07-14 01:19:42.717860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.717943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.717964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.416 [2024-07-14 01:19:42.730604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.730644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.730664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.416 [2024-07-14 01:19:42.743615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.743650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.743670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.416 [2024-07-14 01:19:42.756337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.756372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.756391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.416 [2024-07-14 01:19:42.769080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.769110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.769128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.416 [2024-07-14 01:19:42.781706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.781741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.781759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.416 [2024-07-14 01:19:42.794492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.794526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.794545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.416 [2024-07-14 01:19:42.807204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.807240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.807259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.416 [2024-07-14 01:19:42.819823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.416 [2024-07-14 01:19:42.819857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.416 [2024-07-14 01:19:42.819887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.831633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.831668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.831688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.845120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.845168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.845186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.857905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.857935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.857951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.870792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.870826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.870845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.883564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.883604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.883624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.896232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.896276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.896293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.909011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.909042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.909059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.921739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.921773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.921793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.934633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.934667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.934686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.947547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.947580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.947598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.959691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.959740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.959759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.972576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.972609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:53.675 [2024-07-14 01:19:42.972627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.985126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.985171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.985191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:42.997785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:42.997819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:42.997838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:43.010506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:43.010539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:43.010558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:43.023162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:43.023209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:43.023228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:43.035733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:43.035766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:43.035785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:43.048441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:43.048474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:43.048493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:43.061270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:43.061303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:43.061321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:43.074041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:43.074070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:43.074086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.675 [2024-07-14 01:19:43.086788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.675 [2024-07-14 01:19:43.086822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.675 [2024-07-14 01:19:43.086840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.099545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.099580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.099605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.112562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.112596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.112614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.125383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.125416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.125435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.138055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.138084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.138100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.150854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.150910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.150928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.163734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.163767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.163786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.176607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.176640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.176659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.189551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.189584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.189603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.202382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.202414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.202433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.215095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.215144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.215161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.227834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.227876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.227913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.240347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 
00:33:53.933 [2024-07-14 01:19:43.240379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.240398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.253013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.933 [2024-07-14 01:19:43.253042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.933 [2024-07-14 01:19:43.253059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.933 [2024-07-14 01:19:43.265741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.934 [2024-07-14 01:19:43.265774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.934 [2024-07-14 01:19:43.265792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.934 [2024-07-14 01:19:43.278425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.934 [2024-07-14 01:19:43.278458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.934 [2024-07-14 01:19:43.278477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.934 [2024-07-14 01:19:43.290963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.934 [2024-07-14 01:19:43.291007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.934 [2024-07-14 01:19:43.291023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:53.934 [2024-07-14 01:19:43.303754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.934 [2024-07-14 01:19:43.303787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.934 [2024-07-14 01:19:43.303806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:53.934 [2024-07-14 01:19:43.316643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.934 [2024-07-14 01:19:43.316676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.934 [2024-07-14 01:19:43.316695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:53.934 [2024-07-14 01:19:43.329427] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.934 [2024-07-14 01:19:43.329459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.934 [2024-07-14 01:19:43.329478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:53.934 [2024-07-14 01:19:43.342083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:53.934 [2024-07-14 01:19:43.342112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.934 [2024-07-14 01:19:43.342128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.354744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.354777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.354796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.367438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.367472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.367491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.380257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.380291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.380309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.392930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.392959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.392975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.405554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.405587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.405606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.418271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.418303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.418321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.431011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.431046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.431065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.443684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.443718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.443737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.456495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.456529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.456548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.469393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.469425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.469443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.482244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.482277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.482295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.494657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.494690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.494712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.507476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.507509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.507540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.520372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.520408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.520438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.533081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.533113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.533131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.545732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.545766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.545789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.558417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.558450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.558469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.571360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.571393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.571412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.583957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.584001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.584018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.191 [2024-07-14 01:19:43.596704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.191 [2024-07-14 01:19:43.596737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.191 [2024-07-14 01:19:43.596757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.609665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.609699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.609722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.622349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.622383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.622406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.635186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.635233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.635258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.648258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.648292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.648320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.660806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.660841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.660861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.673637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.673670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:54.449 [2024-07-14 01:19:43.673689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.686676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.686710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.686735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.699772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.699805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.699824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.712726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.712761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.712785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.725614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.725648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.725667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.738305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.738338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.738357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.750738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.750771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.750790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.763568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.763607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.763626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.776146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.776192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.776211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.788792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.788825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.788845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.801482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.801514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.801532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.814189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.814232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.814250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.827001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.827030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.827050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.839725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.839758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.839777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.449 [2024-07-14 01:19:43.852338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.449 [2024-07-14 01:19:43.852371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.449 [2024-07-14 01:19:43.852390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.865172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.865205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.865236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.878013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.878042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.878061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.890645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.890678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.890697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.903333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.903366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.903384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.916015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.916044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.916061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.928734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.928767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.928787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.941522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.941555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.941585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.954223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.954258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.954290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.967335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.967369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.967388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.980092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.980128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.980161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:43.992718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:43.992752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:43.992771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.005231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.005265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.005285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.018002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.018033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.018050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.030632] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.030666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.030685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.043458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.043492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.043514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.056216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.056250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.056268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.069195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.069242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.069267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.081887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.081944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.081960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.094591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.094625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.094643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.107351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.107385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.107403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:54.710 [2024-07-14 01:19:44.120017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.710 [2024-07-14 01:19:44.120047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.710 [2024-07-14 01:19:44.120064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.132793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.132827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.132847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.145413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.145446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.145465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.158141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.158171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.158187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.170785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.170819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.170837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.183441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.183475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.183494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.196254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.196288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.196313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.208874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.208923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.208940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.221725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.221758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.221777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.234517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.234551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.234569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.247193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.247240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.247259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.259875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.259922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.259940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.272647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.272680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.272698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.285303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.285336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.285354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.298148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.298196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.298215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.311007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.311042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.311060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.323891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.972 [2024-07-14 01:19:44.323937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.972 [2024-07-14 01:19:44.323953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:54.972 [2024-07-14 01:19:44.336642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.973 [2024-07-14 01:19:44.336675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.973 [2024-07-14 01:19:44.336694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:54.973 [2024-07-14 01:19:44.349321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.973 [2024-07-14 01:19:44.349355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.973 [2024-07-14 01:19:44.349374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.973 [2024-07-14 01:19:44.362127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.973 [2024-07-14 01:19:44.362172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:54.973 [2024-07-14 01:19:44.362189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:54.973 [2024-07-14 01:19:44.374798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:54.973 [2024-07-14 01:19:44.374832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:54.973 [2024-07-14 01:19:44.374850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.387428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.387462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.233 [2024-07-14 01:19:44.387481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.400200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.400244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.233 [2024-07-14 01:19:44.400260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.412880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.412926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.233 [2024-07-14 01:19:44.412948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.425504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.425537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.233 [2024-07-14 01:19:44.425556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.438381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.438414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.233 [2024-07-14 01:19:44.438433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.450855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.450911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.233 [2024-07-14 01:19:44.450929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.463629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.463663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.233 [2024-07-14 01:19:44.463682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.476280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.476313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.233 [2024-07-14 01:19:44.476332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.233 [2024-07-14 01:19:44.489010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.233 [2024-07-14 01:19:44.489040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.489057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.501747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.501781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.501800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.514528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.514562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.514581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.527381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.527421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.527441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.540037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.540067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.540084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.552694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.552726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.552745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.565682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.565716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.565734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.578363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.578397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.578415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.591163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.591208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.591227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.603624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.603658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.603677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.616479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.616512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.616531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.629217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.234 [2024-07-14 01:19:44.629244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.629276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.234 [2024-07-14 01:19:44.642132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 
00:33:55.234 [2024-07-14 01:19:44.642181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.234 [2024-07-14 01:19:44.642200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.493 [2024-07-14 01:19:44.654887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.493 [2024-07-14 01:19:44.654940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.493 [2024-07-14 01:19:44.654957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.493 [2024-07-14 01:19:44.667714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.493 [2024-07-14 01:19:44.667748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.493 [2024-07-14 01:19:44.667767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.493 [2024-07-14 01:19:44.680613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.493 [2024-07-14 01:19:44.680648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.493 [2024-07-14 01:19:44.680667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.493 [2024-07-14 01:19:44.693574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.493 [2024-07-14 01:19:44.693606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.493 [2024-07-14 01:19:44.693625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.493 [2024-07-14 01:19:44.706205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22a63d0) 00:33:55.493 [2024-07-14 01:19:44.706239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.493 [2024-07-14 01:19:44.706258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.493 00:33:55.493 Latency(us) 00:33:55.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.493 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:55.493 nvme0n1 : 2.01 2427.05 303.38 0.00 0.00 6585.40 5752.60 15146.10 00:33:55.493 =================================================================================================================== 00:33:55.493 Total : 2427.05 303.38 0.00 0.00 6585.40 5752.60 15146.10 00:33:55.493 0 00:33:55.493 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:33:55.493 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:55.493 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:55.493 | .driver_specific 00:33:55.493 | .nvme_error 00:33:55.493 | .status_code 00:33:55.493 | .command_transient_transport_error' 00:33:55.493 01:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1293213 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1293213 ']' 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1293213 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1293213 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1293213' 00:33:55.752 killing process with pid 1293213 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1293213 00:33:55.752 Received shutdown signal, test time was about 2.000000 seconds 00:33:55.752 00:33:55.752 Latency(us) 00:33:55.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.752 =================================================================================================================== 00:33:55.752 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:55.752 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1293213 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1293732 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1293732 /var/tmp/bperf.sock 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1293732 ']' 00:33:56.011 01:19:45 
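The get_transient_errcount step traced a few lines above reduces to a single RPC-plus-jq pipeline: ask the bdevperf app listening on /var/tmp/bperf.sock for per-bdev iostat and pull the command_transient_transport_error counter out of the NVMe error statistics. A minimal sketch of that check, assuming the rpc.py path and socket printed in the trace and a jq binary on PATH (it mirrors the traced commands; it is not the digest.sh source itself):

#!/usr/bin/env bash
# Sketch of the transient-error check traced above. The rootdir and socket are
# the values printed in the trace, not universal defaults.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# The NVMe error counters appear in bdev_get_iostat because this suite enables
# them via bdev_nvme_set_options --nvme-error-stat before attaching the controller.
errcount=$("$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

# The randread run above counted 157 of these; anything greater than zero means
# the injected digest errors surfaced as transient transport errors.
(( errcount > 0 )) || exit 1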
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:56.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:56.011 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.011 [2024-07-14 01:19:45.308710] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:56.011 [2024-07-14 01:19:45.308805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293732 ] 00:33:56.011 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.011 [2024-07-14 01:19:45.366390] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.269 [2024-07-14 01:19:45.451566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.269 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:56.269 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:56.269 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:56.269 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:56.527 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:56.527 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.527 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.527 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:56.527 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:56.527 01:19:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:56.786 nvme0n1 00:33:56.786 01:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:56.786 01:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.786 01:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.046 01:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.047 01:19:46 
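Before the randwrite pass launches its two-second I/O run (traced just below), the commands above wire up the error-injection path over RPC: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf app, crc32c error injection is switched off while the controller is attached with the data digest (--ddgst) enabled, and only then is crc32c corruption injected for 256 operations. A condensed sketch of that sequence, with the socket, target address and NQN copied from the trace (the accel calls go through the suite's rpc_cmd helper, so routing them to the default SPDK RPC socket here is an assumption):

#!/usr/bin/env bash
# Condensed sketch of the setup traced above; every value is copied from the
# trace rather than being a universal default.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Count NVMe errors per status code and retry indefinitely, so injected digest
# failures are recorded as transient transport errors instead of failing the I/O.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c injection disabled while attaching the controller with data digest on.
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Now corrupt the next 256 crc32c computations so every data digest check fails.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256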
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:57.047 01:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:57.047 Running I/O for 2 seconds... 00:33:57.047 [2024-07-14 01:19:46.343715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.047 [2024-07-14 01:19:46.344040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.344078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.047 [2024-07-14 01:19:46.357779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.047 [2024-07-14 01:19:46.358051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.358080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.047 [2024-07-14 01:19:46.372121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.047 [2024-07-14 01:19:46.372372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.372415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.047 [2024-07-14 01:19:46.386041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.047 [2024-07-14 01:19:46.386292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.386320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.047 [2024-07-14 01:19:46.399821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.047 [2024-07-14 01:19:46.400112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.400140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.047 [2024-07-14 01:19:46.413685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.047 [2024-07-14 01:19:46.413972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.414000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.047 [2024-07-14 01:19:46.427753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 
00:33:57.047 [2024-07-14 01:19:46.428074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.428103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.047 [2024-07-14 01:19:46.441488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.047 [2024-07-14 01:19:46.441819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.441847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.047 [2024-07-14 01:19:46.455191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.047 [2024-07-14 01:19:46.455524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.047 [2024-07-14 01:19:46.455552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.469641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.470002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.470030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.483371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.483697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.483725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.497000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.497253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.497280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.510646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.510944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.510976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.524277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with 
pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.524617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.524644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.537935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.538185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.538213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.551420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.551739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.551769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.565191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.565489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.565516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.578795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.579079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.579107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.592582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.592903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.592931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.606315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.606586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.606612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.620076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.620349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.620377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.633742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.634063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.634091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.647475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.647784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.647812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.661101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.661416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.661446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.674658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.674975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.675002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.688203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.688463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.308 [2024-07-14 01:19:46.688505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.308 [2024-07-14 01:19:46.701763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.308 [2024-07-14 01:19:46.702051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.309 [2024-07-14 01:19:46.702079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.309 [2024-07-14 01:19:46.715381] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.309 [2024-07-14 01:19:46.715735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.309 [2024-07-14 01:19:46.715762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.729498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.729794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.729822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.743097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.743432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.743459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.756734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.757069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.757097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.770373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.770696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.770723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.783950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.784185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.784228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.797520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.797864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.797898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.811270] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.811591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.811620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.824880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.825114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.825156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.838317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.838690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.838718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.851938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.852170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.852213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.865592] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.865921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.865953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.879246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.879540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.879567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.892937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.893263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.893292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 
01:19:46.906654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.906963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.906990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.920436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.920766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.920794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.934097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.934350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.934377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.947740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.948011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.948054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.961558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.961882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.961913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.568 [2024-07-14 01:19:46.975216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.568 [2024-07-14 01:19:46.975532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.568 [2024-07-14 01:19:46.975559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:46.989292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:46.989613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:46.989655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:33:57.854 [2024-07-14 01:19:47.002951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.003208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.003235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.016582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.016885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.016912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.030222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.030524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.030550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.043804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.044111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.044138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.057431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.057714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.057742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.071093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.071411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.071438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.084852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.085137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.085164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d 
p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.098750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.099049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.099079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.112365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.112677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.112703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.125966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.126308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.126343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.139540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.139774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.139817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.153241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.153560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.153588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.166835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.167075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.167117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.180537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.180775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.180817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.194084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.194334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.194362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.207654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.207892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.207933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.221318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.221648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.221681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.854 [2024-07-14 01:19:47.234908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.854 [2024-07-14 01:19:47.235157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.854 [2024-07-14 01:19:47.235185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.855 [2024-07-14 01:19:47.248491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.855 [2024-07-14 01:19:47.248805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.855 [2024-07-14 01:19:47.248833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:57.855 [2024-07-14 01:19:47.262252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:57.855 [2024-07-14 01:19:47.262561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:57.855 [2024-07-14 01:19:47.262588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.276286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.276545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.276589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.289895] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.290142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.290169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.303464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.303721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.303764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.317075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.317393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.317420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.330952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.331200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.331227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.344946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.345221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.345247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.359246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.359526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.359554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.373403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.373728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.373757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.387525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.387845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.387882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.401928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.402187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.114 [2024-07-14 01:19:47.402215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.114 [2024-07-14 01:19:47.415877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.114 [2024-07-14 01:19:47.416140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-14 01:19:47.416166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.115 [2024-07-14 01:19:47.429624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.115 [2024-07-14 01:19:47.429935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-14 01:19:47.429962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.115 [2024-07-14 01:19:47.444135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.115 [2024-07-14 01:19:47.444471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-14 01:19:47.444498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.115 [2024-07-14 01:19:47.458522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.115 [2024-07-14 01:19:47.458851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-14 01:19:47.458912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.115 [2024-07-14 01:19:47.472365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.115 [2024-07-14 01:19:47.472637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-14 01:19:47.472663] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.115 [2024-07-14 01:19:47.486321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.115 [2024-07-14 01:19:47.486630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-14 01:19:47.486673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.115 [2024-07-14 01:19:47.500142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.115 [2024-07-14 01:19:47.500419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-14 01:19:47.500444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.115 [2024-07-14 01:19:47.514002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.115 [2024-07-14 01:19:47.514256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.115 [2024-07-14 01:19:47.514283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.527985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.528262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.528289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.541880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.542129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.542156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.555405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.555704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.555730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.568993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.569253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.569280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.582689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.583033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.583064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.596257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.596547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.596573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.610054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.610310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.610339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.623778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.624062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.624090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.637402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.637647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.637677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.651188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.651481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.651507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.665020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.665359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 
01:19:47.665385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.678696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.678980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.679006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.692309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.692616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.692642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.705901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.706148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.706179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.719416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.719733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.719759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.733057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.733387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.733414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.746645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.746909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.746935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.760148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.760457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:58.374 [2024-07-14 01:19:47.760483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.374 [2024-07-14 01:19:47.773726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.374 [2024-07-14 01:19:47.773984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.374 [2024-07-14 01:19:47.774010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.633 [2024-07-14 01:19:47.787656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.633 [2024-07-14 01:19:47.787992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-14 01:19:47.788020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.633 [2024-07-14 01:19:47.801633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.633 [2024-07-14 01:19:47.801892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-14 01:19:47.801935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.633 [2024-07-14 01:19:47.815188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.633 [2024-07-14 01:19:47.815482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-14 01:19:47.815508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.633 [2024-07-14 01:19:47.828861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.633 [2024-07-14 01:19:47.829114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.633 [2024-07-14 01:19:47.829141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.633 [2024-07-14 01:19:47.842351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.842614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.842640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.855880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.856164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13383 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.856191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.869523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.869797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.869823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.883125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.883404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.883430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.896710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.896986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.897027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.910286] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.910539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.910565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.923852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.924139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.924166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.937458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.937773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.937801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.951079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.951343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8982 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.951385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.964808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.965091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.965118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.978497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.978829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.978856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:47.992119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:47.992420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:47.992446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:48.005847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:48.006130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:48.006156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:48.019488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:48.019783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:48.019809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:48.033002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.634 [2024-07-14 01:19:48.033298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.634 [2024-07-14 01:19:48.033325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.634 [2024-07-14 01:19:48.047006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.047353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:15432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.047381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.060766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.061052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.061084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.074432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.074736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.074780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.088130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.088443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.088470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.101728] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.102021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.102063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.115338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.115636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.115663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.128981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.129275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.129318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.142506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.142837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:126 nsid:1 lba:6220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.142862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.156152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.156389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.156416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.169821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.170113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.170139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.183513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.183772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.183799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.197148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.197458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.197484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.210770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.893 [2024-07-14 01:19:48.211042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.893 [2024-07-14 01:19:48.211067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.893 [2024-07-14 01:19:48.224388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.894 [2024-07-14 01:19:48.224689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.894 [2024-07-14 01:19:48.224715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.894 [2024-07-14 01:19:48.237980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.894 [2024-07-14 01:19:48.238294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.894 [2024-07-14 01:19:48.238320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.894 [2024-07-14 01:19:48.251568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.894 [2024-07-14 01:19:48.251875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.894 [2024-07-14 01:19:48.251901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.894 [2024-07-14 01:19:48.265176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.894 [2024-07-14 01:19:48.265504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.894 [2024-07-14 01:19:48.265532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.894 [2024-07-14 01:19:48.278824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.894 [2024-07-14 01:19:48.279109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.894 [2024-07-14 01:19:48.279135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.894 [2024-07-14 01:19:48.292360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.894 [2024-07-14 01:19:48.292677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.894 [2024-07-14 01:19:48.292705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:58.894 [2024-07-14 01:19:48.306314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:58.894 [2024-07-14 01:19:48.306665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.894 [2024-07-14 01:19:48.306693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:59.152 [2024-07-14 01:19:48.320214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184d990) with pdu=0x2000190fe720 00:33:59.152 [2024-07-14 01:19:48.320524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:59.153 [2024-07-14 01:19:48.320550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:59.153 00:33:59.153 Latency(us) 00:33:59.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:59.153 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 
128, IO size: 4096)
00:33:59.153 nvme0n1 : 2.01 18537.17 72.41 0.00 0.00 6888.42 6456.51 16214.09
00:33:59.153 ===================================================================================================================
00:33:59.153 Total : 18537.17 72.41 0.00 0.00 6888.42 6456.51 16214.09
00:33:59.153 0
00:33:59.153 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:59.153 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:59.153 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:59.153 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:59.153 | .driver_specific
00:33:59.153 | .nvme_error
00:33:59.153 | .status_code
00:33:59.153 | .command_transient_transport_error'
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1293732
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1293732 ']'
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1293732
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1293732
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1293732'
00:33:59.413 killing process with pid 1293732
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1293732
00:33:59.413 Received shutdown signal, test time was about 2.000000 seconds
00:33:59.413
00:33:59.413 Latency(us)
00:33:59.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:59.413 ===================================================================================================================
00:33:59.413 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:59.413 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1293732
00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1294140
00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1294140 /var/tmp/bperf.sock 00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1294140 ']' 00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:59.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:59.674 01:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:59.674 [2024-07-14 01:19:48.901037] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:59.674 [2024-07-14 01:19:48.901131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294140 ] 00:33:59.674 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:59.674 Zero copy mechanism will not be used. 00:33:59.674 EAL: No free 2048 kB hugepages reported on node 1 00:33:59.674 [2024-07-14 01:19:48.963147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.674 [2024-07-14 01:19:49.050626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.934 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:59.934 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:59.934 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:59.934 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:34:00.192 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:34:00.192 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.192 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:00.192 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.192 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:00.192 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
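For reference, the setup and measurement flow traced above and continued below can be reproduced by hand against a bdevperf instance that is already listening on /var/tmp/bperf.sock. The sketch only reuses RPCs, paths, addresses and arguments that appear in this log; the RPC shell variable is introduced here purely for brevity, and the block is an illustration of the flow, not part of the recorded output.

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Collect per-error-code NVMe statistics and retry failed I/O indefinitely, so
# transient transport errors are counted instead of failing the job.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep CRC32C error injection disabled while the controller attaches.
$RPC accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF TCP controller with data digest enabled (--ddgst); the
# injected CRC corruption then surfaces as the data digest errors and
# TRANSIENT TRANSPORT ERROR completions seen throughout this log.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Enable CRC32C corruption (same -t corrupt -i 32 arguments as in the trace
# below), start the queued bdevperf run (-t 2 above), then read back the
# transient error counter the same way get_transient_errcount does earlier.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

A non-zero value from that final query is what the (( 145 > 0 )) check earlier in the trace asserts; the first bdevperf run above recorded 145 transient transport errors.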
00:34:00.759 nvme0n1 00:34:00.759 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:34:00.759 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:00.759 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:00.759 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:00.759 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:34:00.759 01:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:00.759 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:00.759 Zero copy mechanism will not be used. 00:34:00.759 Running I/O for 2 seconds... 00:34:00.759 [2024-07-14 01:19:50.055164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:00.759 [2024-07-14 01:19:50.055738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.759 [2024-07-14 01:19:50.055784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:00.759 [2024-07-14 01:19:50.080814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:00.759 [2024-07-14 01:19:50.081389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.759 [2024-07-14 01:19:50.081426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:00.759 [2024-07-14 01:19:50.103899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:00.759 [2024-07-14 01:19:50.104451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.759 [2024-07-14 01:19:50.104484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:00.759 [2024-07-14 01:19:50.130459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:00.759 [2024-07-14 01:19:50.131291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.759 [2024-07-14 01:19:50.131324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.759 [2024-07-14 01:19:50.157647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:00.759 [2024-07-14 01:19:50.158285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.759 [2024-07-14 01:19:50.158318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:34:01.018 [2024-07-14 01:19:50.184359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.184883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.184927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.211368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.212001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.212031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.238906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.239440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.239469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.266149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.266909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.266954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.289849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.290298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.290326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.316731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.317213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.317242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.340583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.341188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.341218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.368198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.368911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.368941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.393210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.393785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.018 [2024-07-14 01:19:50.393813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.018 [2024-07-14 01:19:50.419280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.018 [2024-07-14 01:19:50.419725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.019 [2024-07-14 01:19:50.419753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.278 [2024-07-14 01:19:50.445239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.278 [2024-07-14 01:19:50.445679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.445708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.472292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.472782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.472817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.499050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.499729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.499759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.525590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.526173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.526204] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.550797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.551397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.551426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.575670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.576230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.576259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.602251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.602889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.602919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.626205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.626803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.626832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.652609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.653384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.653413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.279 [2024-07-14 01:19:50.677035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.279 [2024-07-14 01:19:50.677537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.279 [2024-07-14 01:19:50.677571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.703281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.703698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.703728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.730750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.731342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.731370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.758025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.758700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.758728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.785630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.786141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.786181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.812042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.812686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.812723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.839744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.840556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.840595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.865412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.866039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.866069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.892813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.893241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:01.541 [2024-07-14 01:19:50.893286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.919940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.920572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.920601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.541 [2024-07-14 01:19:50.947710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.541 [2024-07-14 01:19:50.948185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.541 [2024-07-14 01:19:50.948217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.804 [2024-07-14 01:19:50.974391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.804 [2024-07-14 01:19:50.974921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.804 [2024-07-14 01:19:50.974959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.804 [2024-07-14 01:19:51.002229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.804 [2024-07-14 01:19:51.002719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.804 [2024-07-14 01:19:51.002750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.804 [2024-07-14 01:19:51.029717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.804 [2024-07-14 01:19:51.030243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.804 [2024-07-14 01:19:51.030274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.804 [2024-07-14 01:19:51.053967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.804 [2024-07-14 01:19:51.054509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.804 [2024-07-14 01:19:51.054537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.804 [2024-07-14 01:19:51.081041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.804 [2024-07-14 01:19:51.081620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.804 [2024-07-14 01:19:51.081647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.805 [2024-07-14 01:19:51.107374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.805 [2024-07-14 01:19:51.108070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.805 [2024-07-14 01:19:51.108103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:01.805 [2024-07-14 01:19:51.135416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.805 [2024-07-14 01:19:51.136104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.805 [2024-07-14 01:19:51.136134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:01.805 [2024-07-14 01:19:51.162966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.805 [2024-07-14 01:19:51.163760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.805 [2024-07-14 01:19:51.163803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:01.805 [2024-07-14 01:19:51.189031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.805 [2024-07-14 01:19:51.189613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.805 [2024-07-14 01:19:51.189643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:01.805 [2024-07-14 01:19:51.211597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:01.805 [2024-07-14 01:19:51.212068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.805 [2024-07-14 01:19:51.212104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.232852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.233297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.233326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.258253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.258634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.258664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.281682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.282394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.282424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.306940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.307454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.307483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.331459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.332066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.332096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.356790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.357326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.357361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.383476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.384024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.384054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.409851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.410466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.410495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.434428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.435203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.435232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.065 [2024-07-14 01:19:51.460171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.065 [2024-07-14 01:19:51.460611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.065 [2024-07-14 01:19:51.460640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.486945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.487605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.487635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.514001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.514683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.514713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.540558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.541113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.541142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.567022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.567475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.567504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.594119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.594748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.594777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.617450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 
[2024-07-14 01:19:51.617830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.617891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.642978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.643501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.643530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.668831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.669537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.669565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.693519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.693984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.694016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.324 [2024-07-14 01:19:51.719298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.324 [2024-07-14 01:19:51.719728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.324 [2024-07-14 01:19:51.719757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.746338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.746830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.746898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.773568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.774109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.774149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.799647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.800403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.800436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.827621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.828397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.828426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.854587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.855127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.855171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.880152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.880720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.880748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.905652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.906236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.906267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.928461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.929025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.929056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.952719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.953264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.953297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.583 [2024-07-14 01:19:51.980056] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.583 [2024-07-14 01:19:51.980629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.583 [2024-07-14 01:19:51.980658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.841 [2024-07-14 01:19:52.007389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.841 [2024-07-14 01:19:52.007825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.841 [2024-07-14 01:19:52.007877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.841 [2024-07-14 01:19:52.033028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x184dcd0) with pdu=0x2000190fef90 00:34:02.841 [2024-07-14 01:19:52.033553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.841 [2024-07-14 01:19:52.033582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.841 00:34:02.841 Latency(us) 00:34:02.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.841 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:02.841 nvme0n1 : 2.01 1191.15 148.89 0.00 0.00 13383.79 6699.24 28932.93 00:34:02.841 =================================================================================================================== 00:34:02.841 Total : 1191.15 148.89 0.00 0.00 13383.79 6699.24 28932.93 00:34:02.841 0 00:34:02.841 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:02.841 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:02.841 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:02.841 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:02.841 | .driver_specific 00:34:02.841 | .nvme_error 00:34:02.841 | .status_code 00:34:02.841 | .command_transient_transport_error' 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 77 > 0 )) 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1294140 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1294140 ']' 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1294140 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1294140 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1294140' 00:34:03.099 killing process with pid 1294140 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1294140 00:34:03.099 Received shutdown signal, test time was about 2.000000 seconds 00:34:03.099 00:34:03.099 Latency(us) 00:34:03.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.099 =================================================================================================================== 00:34:03.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:03.099 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1294140 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1292776 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1292776 ']' 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1292776 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1292776 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1292776' 00:34:03.357 killing process with pid 1292776 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1292776 00:34:03.357 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1292776 00:34:03.614 00:34:03.614 real 0m15.260s 00:34:03.614 user 0m30.689s 00:34:03.614 sys 0m3.903s 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:03.614 ************************************ 00:34:03.614 END TEST nvmf_digest_error 00:34:03.614 ************************************ 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:34:03.614 rmmod nvme_tcp 00:34:03.614 rmmod nvme_fabrics 00:34:03.614 rmmod nvme_keyring 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1292776 ']' 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1292776 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1292776 ']' 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1292776 00:34:03.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1292776) - No such process 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1292776 is not found' 00:34:03.614 Process with pid 1292776 is not found 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:03.614 01:19:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.518 01:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:05.518 00:34:05.518 real 0m34.653s 00:34:05.518 user 1m1.788s 00:34:05.518 sys 0m9.239s 00:34:05.518 01:19:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.518 01:19:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:05.518 ************************************ 00:34:05.518 END TEST nvmf_digest 00:34:05.518 ************************************ 00:34:05.776 01:19:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:05.776 01:19:54 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:34:05.776 01:19:54 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:34:05.776 01:19:54 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:34:05.776 01:19:54 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:05.776 01:19:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:05.776 01:19:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.776 01:19:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:05.776 ************************************ 00:34:05.776 START TEST nvmf_bdevperf 00:34:05.776 ************************************ 00:34:05.776 01:19:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:05.776 * Looking for test storage... 
00:34:05.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.776 01:19:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:05.777 01:19:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:07.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:07.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:07.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:07.680 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.680 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:07.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:34:07.939 00:34:07.939 --- 10.0.0.2 ping statistics --- 00:34:07.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.939 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:34:07.939 00:34:07.939 --- 10.0.0.1 ping statistics --- 00:34:07.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.939 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1296487 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1296487 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1296487 ']' 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:07.939 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:07.939 [2024-07-14 01:19:57.233451] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:07.939 [2024-07-14 01:19:57.233536] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.939 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.939 [2024-07-14 01:19:57.303472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:08.198 [2024-07-14 01:19:57.398439] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:08.198 [2024-07-14 01:19:57.398505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:08.198 [2024-07-14 01:19:57.398522] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.198 [2024-07-14 01:19:57.398536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.198 [2024-07-14 01:19:57.398547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.198 [2024-07-14 01:19:57.398636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:08.198 [2024-07-14 01:19:57.398690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:08.198 [2024-07-14 01:19:57.398693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.198 [2024-07-14 01:19:57.545637] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.198 Malloc0 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:08.198 [2024-07-14 01:19:57.606519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:08.198 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:08.459 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:08.459 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:08.459 { 00:34:08.459 "params": { 00:34:08.459 "name": "Nvme$subsystem", 00:34:08.459 "trtype": "$TEST_TRANSPORT", 00:34:08.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.459 "adrfam": "ipv4", 00:34:08.459 "trsvcid": "$NVMF_PORT", 00:34:08.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.459 "hdgst": ${hdgst:-false}, 00:34:08.459 "ddgst": ${ddgst:-false} 00:34:08.459 }, 00:34:08.459 "method": "bdev_nvme_attach_controller" 00:34:08.459 } 00:34:08.459 EOF 00:34:08.459 )") 00:34:08.459 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:08.459 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:08.459 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:08.459 01:19:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:08.459 "params": { 00:34:08.459 "name": "Nvme1", 00:34:08.459 "trtype": "tcp", 00:34:08.459 "traddr": "10.0.0.2", 00:34:08.459 "adrfam": "ipv4", 00:34:08.459 "trsvcid": "4420", 00:34:08.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.459 "hdgst": false, 00:34:08.459 "ddgst": false 00:34:08.459 }, 00:34:08.459 "method": "bdev_nvme_attach_controller" 00:34:08.459 }' 00:34:08.459 [2024-07-14 01:19:57.654984] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:08.459 [2024-07-14 01:19:57.655060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296554 ] 00:34:08.459 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.459 [2024-07-14 01:19:57.717733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.459 [2024-07-14 01:19:57.808257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.025 Running I/O for 1 seconds... 
00:34:09.962 00:34:09.962 Latency(us) 00:34:09.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.962 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:09.962 Verification LBA range: start 0x0 length 0x4000 00:34:09.962 Nvme1n1 : 1.01 8073.25 31.54 0.00 0.00 15757.35 2196.67 15825.73 00:34:09.962 =================================================================================================================== 00:34:09.962 Total : 8073.25 31.54 0.00 0.00 15757.35 2196.67 15825.73 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1296775 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:10.221 { 00:34:10.221 "params": { 00:34:10.221 "name": "Nvme$subsystem", 00:34:10.221 "trtype": "$TEST_TRANSPORT", 00:34:10.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.221 "adrfam": "ipv4", 00:34:10.221 "trsvcid": "$NVMF_PORT", 00:34:10.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.221 "hdgst": ${hdgst:-false}, 00:34:10.221 "ddgst": ${ddgst:-false} 00:34:10.221 }, 00:34:10.221 "method": "bdev_nvme_attach_controller" 00:34:10.221 } 00:34:10.221 EOF 00:34:10.221 )") 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:34:10.221 01:19:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:10.221 "params": { 00:34:10.221 "name": "Nvme1", 00:34:10.221 "trtype": "tcp", 00:34:10.221 "traddr": "10.0.0.2", 00:34:10.221 "adrfam": "ipv4", 00:34:10.221 "trsvcid": "4420", 00:34:10.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:10.221 "hdgst": false, 00:34:10.221 "ddgst": false 00:34:10.221 }, 00:34:10.221 "method": "bdev_nvme_attach_controller" 00:34:10.221 }' 00:34:10.221 [2024-07-14 01:19:59.434502] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:10.221 [2024-07-14 01:19:59.434577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296775 ] 00:34:10.221 EAL: No free 2048 kB hugepages reported on node 1 00:34:10.221 [2024-07-14 01:19:59.493571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.221 [2024-07-14 01:19:59.581426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.480 Running I/O for 15 seconds... 
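To recap the xtrace above in one place: the harness moved one E810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace and addressed the pair as 10.0.0.1 (initiator, cvl_0_1) and 10.0.0.2 (target, cvl_0_0), started nvmf_tgt inside that namespace, provisioned a TCP subsystem backed by a Malloc0 bdev over RPC, and is now driving it with bdevperf, first the 1 second verify run whose result table is shown above and then the 15 second run whose target-kill recovery is exercised below. A minimal standalone sketch of the same bring-up follows; it assumes scripts/rpc.py is called directly instead of the harness's rpc_cmd wrapper, that the JSON config goes into a file named bdevperf.json instead of /dev/fd/6x, and that the standard SPDK JSON-config wrapper matches what gen_nvmf_target_json emits.

# target-side network namespace and addressing (device names copied from the log)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator interface
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target interface
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# start the target in the namespace and provision it over /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
sleep 2   # the real harness polls until the RPC socket answers (waitforlisten)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf consumes the attach-controller params printed above,
# wrapped in the usual SPDK JSON-config document (bdevperf.json is a hypothetical name)
cat > bdevperf.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 1
./build/examples/bdevperf --json bdevperf.json -q 128 -o 4096 -w verify -t 15 -f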
00:34:13.015 01:20:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1296487 00:34:13.015 01:20:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:13.015 [2024-07-14 01:20:02.404700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.404754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.404792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.404810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.404830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.404846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.404864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.404915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.404934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.404948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.404963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.404978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.404995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405101] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.015 [2024-07-14 01:20:02.405593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.015 [2024-07-14 01:20:02.405624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.405983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.405996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.406011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.406025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.406040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.406053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.015 [2024-07-14 01:20:02.406068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.015 [2024-07-14 01:20:02.406081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 
[2024-07-14 01:20:02.406439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:52544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.406975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.406992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407095] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:52760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.016 [2024-07-14 01:20:02.407478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.016 [2024-07-14 01:20:02.407495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:52792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.407980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.407995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 
01:20:02.408094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.017 [2024-07-14 01:20:02.408554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.017 [2024-07-14 01:20:02.408587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.017 [2024-07-14 01:20:02.408622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:53136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.017 [2024-07-14 01:20:02.408659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.017 [2024-07-14 01:20:02.408692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:53152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:13.017 [2024-07-14 01:20:02.408724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:53048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.017 [2024-07-14 01:20:02.408853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.017 [2024-07-14 01:20:02.408878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.018 [2024-07-14 01:20:02.408896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.018 [2024-07-14 01:20:02.408928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.018 [2024-07-14 01:20:02.408944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.018 [2024-07-14 01:20:02.408960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:53080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:13.018 [2024-07-14 01:20:02.408974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.018 [2024-07-14 01:20:02.408988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9201f0 is same with the state(5) to be set 00:34:13.018 [2024-07-14 01:20:02.409005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:13.018 [2024-07-14 01:20:02.409017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:13.018 [2024-07-14 01:20:02.409029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53088 len:8 PRP1 0x0 PRP2 0x0 00:34:13.018 [2024-07-14 01:20:02.409042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.018 [2024-07-14 01:20:02.409105] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9201f0 was disconnected and freed. reset controller. 
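Everything from the kill onward up to this point is host-side cleanup: once the TCP connection to the dead target drops, bdev_nvme completes every command still queued on the 128-deep verify job manually with ABORTED - SQ DELETION, frees the I/O qpair (0x9201f0), and schedules a controller reset. If the console output is captured to a file, the flush is easy to confirm by counting those completions; a small sketch, assuming the run was saved as bdevperf.log (hypothetical file name):

# completions aborted on the I/O queue (qid:1) after the target was killed
grep -c 'ABORTED - SQ DELETION (00/08) qid:1' bdevperf.log

# split the aborted commands into reads and writes (command-print lines only)
grep -c 'READ sqid:1 cid:' bdevperf.log
grep -c 'WRITE sqid:1 cid:' bdevperf.log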
00:34:13.018 [2024-07-14 01:20:02.409194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.018 [2024-07-14 01:20:02.409231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.018 [2024-07-14 01:20:02.409255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.018 [2024-07-14 01:20:02.409272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.018 [2024-07-14 01:20:02.409287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.018 [2024-07-14 01:20:02.409301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.018 [2024-07-14 01:20:02.409331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.018 [2024-07-14 01:20:02.409345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.018 [2024-07-14 01:20:02.409359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.018 [2024-07-14 01:20:02.413171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.018 [2024-07-14 01:20:02.413208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.018 [2024-07-14 01:20:02.413946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.018 [2024-07-14 01:20:02.413975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.018 [2024-07-14 01:20:02.413993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.018 [2024-07-14 01:20:02.414232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.018 [2024-07-14 01:20:02.414474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.018 [2024-07-14 01:20:02.414498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.018 [2024-07-14 01:20:02.414518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.018 [2024-07-14 01:20:02.418071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
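[editor's note] The recurring "connect() failed, errno = 111" from posix_sock_create is Linux ECONNREFUSED: the target at 10.0.0.2:4420 is refusing TCP connections, so every reconnect attempt fails before any NVMe/TCP handshake and the controller stays in the failed state. A minimal POSIX sketch of the same failure mode, purely illustrative and not the SPDK sock layer:

    /* Minimal POSIX sketch: a TCP connect() to the target address that fails
     * with errno 111 (ECONNREFUSED) when nothing is listening on the port. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            /* With no listener on 10.0.0.2:4420 this reports errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }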
00:34:13.314 [2024-07-14 01:20:02.427558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.428028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.428073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.428093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.428333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.428574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.314 [2024-07-14 01:20:02.428598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.314 [2024-07-14 01:20:02.428613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.314 [2024-07-14 01:20:02.432187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.314 [2024-07-14 01:20:02.441536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.441998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.442030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.442049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.442286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.442528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.314 [2024-07-14 01:20:02.442551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.314 [2024-07-14 01:20:02.442566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.314 [2024-07-14 01:20:02.446136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.314 [2024-07-14 01:20:02.455367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.455835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.455875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.455896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.456134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.456375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.314 [2024-07-14 01:20:02.456398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.314 [2024-07-14 01:20:02.456413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.314 [2024-07-14 01:20:02.459989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.314 [2024-07-14 01:20:02.469220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.469693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.469724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.469742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.469992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.470234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.314 [2024-07-14 01:20:02.470257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.314 [2024-07-14 01:20:02.470272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.314 [2024-07-14 01:20:02.473824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.314 [2024-07-14 01:20:02.483061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.483528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.483559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.483577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.483820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.484073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.314 [2024-07-14 01:20:02.484097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.314 [2024-07-14 01:20:02.484112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.314 [2024-07-14 01:20:02.487664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.314 [2024-07-14 01:20:02.496897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.497373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.497404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.497421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.497659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.497911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.314 [2024-07-14 01:20:02.497935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.314 [2024-07-14 01:20:02.497950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.314 [2024-07-14 01:20:02.501501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.314 [2024-07-14 01:20:02.510724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.511187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.511217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.511235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.511472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.511713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.314 [2024-07-14 01:20:02.511736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.314 [2024-07-14 01:20:02.511751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.314 [2024-07-14 01:20:02.515316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.314 [2024-07-14 01:20:02.524579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.525012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.525043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.525061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.525299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.525540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.314 [2024-07-14 01:20:02.525563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.314 [2024-07-14 01:20:02.525584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.314 [2024-07-14 01:20:02.529148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.314 [2024-07-14 01:20:02.538587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.314 [2024-07-14 01:20:02.539057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.314 [2024-07-14 01:20:02.539088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.314 [2024-07-14 01:20:02.539106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.314 [2024-07-14 01:20:02.539344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.314 [2024-07-14 01:20:02.539585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.539608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.539623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.543186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.315 [2024-07-14 01:20:02.552420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.552879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.552910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.552928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.553165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.553406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.553429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.553444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.557004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.315 [2024-07-14 01:20:02.566442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.566926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.566958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.566975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.567212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.567453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.567476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.567491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.571052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.315 [2024-07-14 01:20:02.580279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.580735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.580770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.580789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.581038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.581279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.581302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.581317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.584878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.315 [2024-07-14 01:20:02.594106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.594584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.594615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.594633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.594881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.595122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.595145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.595160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.598710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.315 [2024-07-14 01:20:02.607942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.608394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.608424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.608442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.608679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.608932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.608956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.608971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.612520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.315 [2024-07-14 01:20:02.621961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.622395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.622427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.622445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.622683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.622941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.622966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.622981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.626531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.315 [2024-07-14 01:20:02.635972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.636412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.636443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.636460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.636698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.636951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.636975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.636990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.640544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.315 [2024-07-14 01:20:02.649991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.650447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.650478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.650496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.650733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.650985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.651009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.651025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.654685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.315 [2024-07-14 01:20:02.663923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.664375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.664407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.664424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.664661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.664915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.664939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.664954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.668508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.315 [2024-07-14 01:20:02.677744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.678205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.678236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.678254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.315 [2024-07-14 01:20:02.678491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.315 [2024-07-14 01:20:02.678732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.315 [2024-07-14 01:20:02.678754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.315 [2024-07-14 01:20:02.678770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.315 [2024-07-14 01:20:02.682328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.315 [2024-07-14 01:20:02.691791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.315 [2024-07-14 01:20:02.692260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.315 [2024-07-14 01:20:02.692292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.315 [2024-07-14 01:20:02.692310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.316 [2024-07-14 01:20:02.692547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.316 [2024-07-14 01:20:02.692788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.316 [2024-07-14 01:20:02.692811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.316 [2024-07-14 01:20:02.692827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.316 [2024-07-14 01:20:02.696393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.316 [2024-07-14 01:20:02.705616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.316 [2024-07-14 01:20:02.706077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.316 [2024-07-14 01:20:02.706108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.316 [2024-07-14 01:20:02.706127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.316 [2024-07-14 01:20:02.706364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.316 [2024-07-14 01:20:02.706604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.316 [2024-07-14 01:20:02.706627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.316 [2024-07-14 01:20:02.706642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.316 [2024-07-14 01:20:02.710204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.316 [2024-07-14 01:20:02.719440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.316 [2024-07-14 01:20:02.719880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.316 [2024-07-14 01:20:02.719912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.316 [2024-07-14 01:20:02.719936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.316 [2024-07-14 01:20:02.720176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.316 [2024-07-14 01:20:02.720417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.316 [2024-07-14 01:20:02.720440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.316 [2024-07-14 01:20:02.720455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.316 [2024-07-14 01:20:02.724019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.643 [2024-07-14 01:20:02.733665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.734128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.734162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.734180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.734438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.734689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.734720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.734738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.738454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.643 [2024-07-14 01:20:02.747698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.748144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.748175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.748193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.748431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.748672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.748696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.748711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.752274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.643 [2024-07-14 01:20:02.761709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.762178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.762209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.762227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.762464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.762705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.762734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.762750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.766313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.643 [2024-07-14 01:20:02.775544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.776000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.776031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.776049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.776287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.776528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.776551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.776566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.780129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.643 [2024-07-14 01:20:02.789359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.789800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.789831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.789849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.790095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.790337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.790360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.790375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.793936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.643 [2024-07-14 01:20:02.803368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.803822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.803853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.803882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.804122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.804363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.804387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.804402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.807964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.643 [2024-07-14 01:20:02.817199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.817664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.817695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.817713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.817959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.818200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.818223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.818238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.821791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.643 [2024-07-14 01:20:02.831027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.831494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.831524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.831542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.831779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.832029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.832053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.832069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.835618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.643 [2024-07-14 01:20:02.844852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.845304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.845335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.845353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.845590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.845831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.845854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.845878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.849438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.643 [2024-07-14 01:20:02.858882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.859343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.859374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.859391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.859635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.859888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.859912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.859928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.863478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.643 [2024-07-14 01:20:02.872700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.873162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.873192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.873210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.873447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.873689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.873711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.873727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.877290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.643 [2024-07-14 01:20:02.886518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.886968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.886999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.887018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.887255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.887496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.887519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.887534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.891096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.643 [2024-07-14 01:20:02.900526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.900955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.900986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.901004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.643 [2024-07-14 01:20:02.901242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.643 [2024-07-14 01:20:02.901483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.643 [2024-07-14 01:20:02.901506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.643 [2024-07-14 01:20:02.901527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.643 [2024-07-14 01:20:02.905091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.643 [2024-07-14 01:20:02.914524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.643 [2024-07-14 01:20:02.914977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.643 [2024-07-14 01:20:02.915009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.643 [2024-07-14 01:20:02.915026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.644 [2024-07-14 01:20:02.915263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.644 [2024-07-14 01:20:02.915505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.644 [2024-07-14 01:20:02.915528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.644 [2024-07-14 01:20:02.915543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.644 [2024-07-14 01:20:02.919107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.644 [2024-07-14 01:20:02.928546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.644 [2024-07-14 01:20:02.929015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.644 [2024-07-14 01:20:02.929046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.644 [2024-07-14 01:20:02.929064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.644 [2024-07-14 01:20:02.929301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.644 [2024-07-14 01:20:02.929542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.644 [2024-07-14 01:20:02.929565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.644 [2024-07-14 01:20:02.929581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.644 [2024-07-14 01:20:02.933142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.644 [2024-07-14 01:20:02.942367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.644 [2024-07-14 01:20:02.942806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.644 [2024-07-14 01:20:02.942836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.644 [2024-07-14 01:20:02.942854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.644 [2024-07-14 01:20:02.943101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.644 [2024-07-14 01:20:02.943342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.644 [2024-07-14 01:20:02.943365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.644 [2024-07-14 01:20:02.943381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.644 [2024-07-14 01:20:02.946946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.644 [2024-07-14 01:20:02.956380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.644 [2024-07-14 01:20:02.956830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.644 [2024-07-14 01:20:02.956875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.644 [2024-07-14 01:20:02.956896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.644 [2024-07-14 01:20:02.957133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.644 [2024-07-14 01:20:02.957374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.644 [2024-07-14 01:20:02.957397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.644 [2024-07-14 01:20:02.957412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.644 [2024-07-14 01:20:02.960972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.644 [2024-07-14 01:20:02.970199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.644 [2024-07-14 01:20:02.970658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.644 [2024-07-14 01:20:02.970689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.644 [2024-07-14 01:20:02.970706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.644 [2024-07-14 01:20:02.970954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.644 [2024-07-14 01:20:02.971196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.644 [2024-07-14 01:20:02.971219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.644 [2024-07-14 01:20:02.971235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.644 [2024-07-14 01:20:02.974786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.644 [2024-07-14 01:20:02.984017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.644 [2024-07-14 01:20:02.984476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.644 [2024-07-14 01:20:02.984507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.644 [2024-07-14 01:20:02.984525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.644 [2024-07-14 01:20:02.984761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.644 [2024-07-14 01:20:02.985014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.644 [2024-07-14 01:20:02.985038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.644 [2024-07-14 01:20:02.985053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.644 [2024-07-14 01:20:02.988604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.644 [2024-07-14 01:20:02.997851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.644 [2024-07-14 01:20:02.998348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.644 [2024-07-14 01:20:02.998379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.644 [2024-07-14 01:20:02.998397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.644 [2024-07-14 01:20:02.998635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.644 [2024-07-14 01:20:02.998894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.644 [2024-07-14 01:20:02.998919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.644 [2024-07-14 01:20:02.998934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.002629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.902 [2024-07-14 01:20:03.011692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.012196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.012246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.012264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.012502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.012743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.012766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.012781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.016346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.902 [2024-07-14 01:20:03.025573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.026028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.026059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.026077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.026315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.026556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.026579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.026594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.030158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.902 [2024-07-14 01:20:03.039390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.039922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.039953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.039972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.040209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.040450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.040473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.040488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.044065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.902 [2024-07-14 01:20:03.053294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.053746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.053777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.053795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.054041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.054283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.054307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.054323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.057885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.902 [2024-07-14 01:20:03.067234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.067701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.067732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.067751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.067998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.068241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.068264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.068280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.071835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.902 [2024-07-14 01:20:03.081098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.081551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.081582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.081600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.081838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.082089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.082113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.082128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.085678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.902 [2024-07-14 01:20:03.095120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.095579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.095610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.095634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.095881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.096122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.096145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.096160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.099712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.902 [2024-07-14 01:20:03.108961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.109583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.109632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.109650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.109895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.110137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.110160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.110175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.113745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.902 [2024-07-14 01:20:03.122769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.123213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.123244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.123262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.123499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.123741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.123764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.123779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.127345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.902 [2024-07-14 01:20:03.136788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.137254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.137285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.137304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.137542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.137783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.137815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.137831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.141405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.902 [2024-07-14 01:20:03.150658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.151119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.151151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.151170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.151408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.151649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.151672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.151687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.155249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.902 [2024-07-14 01:20:03.164482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.164949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.164980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.164998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.165236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.165484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.165507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.165522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.169099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.902 [2024-07-14 01:20:03.178338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.178797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.178828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.178845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.179091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.179333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.179357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.179372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.182945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.902 [2024-07-14 01:20:03.192173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.902 [2024-07-14 01:20:03.192759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.902 [2024-07-14 01:20:03.192813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.902 [2024-07-14 01:20:03.192831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.902 [2024-07-14 01:20:03.193078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.902 [2024-07-14 01:20:03.193320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.902 [2024-07-14 01:20:03.193343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.902 [2024-07-14 01:20:03.193360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.902 [2024-07-14 01:20:03.196921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.902 [2024-07-14 01:20:03.206052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.903 [2024-07-14 01:20:03.206578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.903 [2024-07-14 01:20:03.206627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.903 [2024-07-14 01:20:03.206645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.903 [2024-07-14 01:20:03.206892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.903 [2024-07-14 01:20:03.207134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.903 [2024-07-14 01:20:03.207157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.903 [2024-07-14 01:20:03.207172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.903 [2024-07-14 01:20:03.210725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.903 [2024-07-14 01:20:03.219964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.903 [2024-07-14 01:20:03.220417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.903 [2024-07-14 01:20:03.220448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.903 [2024-07-14 01:20:03.220466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.903 [2024-07-14 01:20:03.220703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.903 [2024-07-14 01:20:03.220956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.903 [2024-07-14 01:20:03.220980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.903 [2024-07-14 01:20:03.220996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.903 [2024-07-14 01:20:03.224548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.903 [2024-07-14 01:20:03.233800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.903 [2024-07-14 01:20:03.234263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.903 [2024-07-14 01:20:03.234294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.903 [2024-07-14 01:20:03.234312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.903 [2024-07-14 01:20:03.234555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.903 [2024-07-14 01:20:03.234797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.903 [2024-07-14 01:20:03.234820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.903 [2024-07-14 01:20:03.234835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.903 [2024-07-14 01:20:03.238403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.903 [2024-07-14 01:20:03.247639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.903 [2024-07-14 01:20:03.248082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.903 [2024-07-14 01:20:03.248113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.903 [2024-07-14 01:20:03.248131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.903 [2024-07-14 01:20:03.248368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.903 [2024-07-14 01:20:03.248609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.903 [2024-07-14 01:20:03.248633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.903 [2024-07-14 01:20:03.248648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.903 [2024-07-14 01:20:03.252208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.903 [2024-07-14 01:20:03.261481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.903 [2024-07-14 01:20:03.261949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.903 [2024-07-14 01:20:03.261980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.903 [2024-07-14 01:20:03.261998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.903 [2024-07-14 01:20:03.262236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.903 [2024-07-14 01:20:03.262476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.903 [2024-07-14 01:20:03.262500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.903 [2024-07-14 01:20:03.262515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.903 [2024-07-14 01:20:03.266088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.903 [2024-07-14 01:20:03.275328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.903 [2024-07-14 01:20:03.275786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.903 [2024-07-14 01:20:03.275817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.903 [2024-07-14 01:20:03.275835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.903 [2024-07-14 01:20:03.276081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.903 [2024-07-14 01:20:03.276322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.903 [2024-07-14 01:20:03.276345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.903 [2024-07-14 01:20:03.276366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.903 [2024-07-14 01:20:03.279928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:13.903 [2024-07-14 01:20:03.289172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.903 [2024-07-14 01:20:03.289602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.903 [2024-07-14 01:20:03.289633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.903 [2024-07-14 01:20:03.289650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.903 [2024-07-14 01:20:03.289898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.903 [2024-07-14 01:20:03.290139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.903 [2024-07-14 01:20:03.290173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.903 [2024-07-14 01:20:03.290188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.903 [2024-07-14 01:20:03.293746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.903 [2024-07-14 01:20:03.303006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:13.903 [2024-07-14 01:20:03.303446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.903 [2024-07-14 01:20:03.303477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:13.903 [2024-07-14 01:20:03.303494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:13.903 [2024-07-14 01:20:03.303731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:13.903 [2024-07-14 01:20:03.303981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.903 [2024-07-14 01:20:03.304005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.903 [2024-07-14 01:20:03.304021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.903 [2024-07-14 01:20:03.307587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.162 [2024-07-14 01:20:03.317109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.162 [2024-07-14 01:20:03.317544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-14 01:20:03.317576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.162 [2024-07-14 01:20:03.317594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.162 [2024-07-14 01:20:03.317832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.162 [2024-07-14 01:20:03.318093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.162 [2024-07-14 01:20:03.318129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.162 [2024-07-14 01:20:03.318156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.162 [2024-07-14 01:20:03.321789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.162 [2024-07-14 01:20:03.331035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.162 [2024-07-14 01:20:03.331535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-14 01:20:03.331572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.162 [2024-07-14 01:20:03.331590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.162 [2024-07-14 01:20:03.331828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.162 [2024-07-14 01:20:03.332085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.162 [2024-07-14 01:20:03.332109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.162 [2024-07-14 01:20:03.332124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.162 [2024-07-14 01:20:03.335681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.162 [2024-07-14 01:20:03.344951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.162 [2024-07-14 01:20:03.345419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-14 01:20:03.345450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.162 [2024-07-14 01:20:03.345468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.162 [2024-07-14 01:20:03.345705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.162 [2024-07-14 01:20:03.345956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.162 [2024-07-14 01:20:03.345980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.162 [2024-07-14 01:20:03.345995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.162 [2024-07-14 01:20:03.349551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.162 [2024-07-14 01:20:03.358787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.162 [2024-07-14 01:20:03.359299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.162 [2024-07-14 01:20:03.359348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.162 [2024-07-14 01:20:03.359366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.162 [2024-07-14 01:20:03.359603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.162 [2024-07-14 01:20:03.359844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.359875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.359893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.363446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.163 [2024-07-14 01:20:03.372685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.373160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.373191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.373209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.373446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.373693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.373716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.373731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.377291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.163 [2024-07-14 01:20:03.386516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.386979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.387010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.387028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.387265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.387506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.387529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.387544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.391107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.163 [2024-07-14 01:20:03.400333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.400785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.400816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.400834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.401081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.401323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.401346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.401361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.404919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.163 [2024-07-14 01:20:03.414146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.414594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.414625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.414643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.414891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.415133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.415156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.415172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.418748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.163 [2024-07-14 01:20:03.427990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.428450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.428480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.428498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.428735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.428985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.429009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.429024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.432710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.163 [2024-07-14 01:20:03.441943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.442370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.442401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.442420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.442658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.442910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.442934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.442949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.446507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.163 [2024-07-14 01:20:03.455951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.456385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.456416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.456434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.456672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.456933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.456957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.456972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.460525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.163 [2024-07-14 01:20:03.469963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.470406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.470438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.470461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.470699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.470952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.470976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.470991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.474543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.163 [2024-07-14 01:20:03.484013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.484445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.484476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.484494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.484731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.484984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.485008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.485024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.488578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.163 [2024-07-14 01:20:03.498022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.498473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.498504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.498521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.498758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.499010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.499034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.499049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.502601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.163 [2024-07-14 01:20:03.511841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.512278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.512309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.512327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.512565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.512805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.512834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.512850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.516416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.163 [2024-07-14 01:20:03.525873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.526324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.526355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.526373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.526610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.526851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.526884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.526901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.530454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.163 [2024-07-14 01:20:03.539689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.540160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.540191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.540208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.540446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.540686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.540709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.540724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.544295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.163 [2024-07-14 01:20:03.553526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.553965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.553997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.554015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.554253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.554494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.554517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.554533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.558097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.163 [2024-07-14 01:20:03.567532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.163 [2024-07-14 01:20:03.567966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.163 [2024-07-14 01:20:03.567997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.163 [2024-07-14 01:20:03.568016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.163 [2024-07-14 01:20:03.568254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.163 [2024-07-14 01:20:03.568495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.163 [2024-07-14 01:20:03.568518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.163 [2024-07-14 01:20:03.568533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.163 [2024-07-14 01:20:03.572180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.423 [2024-07-14 01:20:03.581489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.423 [2024-07-14 01:20:03.581968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.423 [2024-07-14 01:20:03.582000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.423 [2024-07-14 01:20:03.582018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.423 [2024-07-14 01:20:03.582256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.423 [2024-07-14 01:20:03.582497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.423 [2024-07-14 01:20:03.582520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.423 [2024-07-14 01:20:03.582536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.423 [2024-07-14 01:20:03.586103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.423 [2024-07-14 01:20:03.595335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.423 [2024-07-14 01:20:03.595789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.423 [2024-07-14 01:20:03.595820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.423 [2024-07-14 01:20:03.595838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.423 [2024-07-14 01:20:03.596086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.423 [2024-07-14 01:20:03.596327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.423 [2024-07-14 01:20:03.596350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.423 [2024-07-14 01:20:03.596366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.423 [2024-07-14 01:20:03.599925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.423 [2024-07-14 01:20:03.609156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.423 [2024-07-14 01:20:03.609615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.423 [2024-07-14 01:20:03.609645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.423 [2024-07-14 01:20:03.609663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.423 [2024-07-14 01:20:03.609919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.423 [2024-07-14 01:20:03.610160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.423 [2024-07-14 01:20:03.610183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.423 [2024-07-14 01:20:03.610199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.423 [2024-07-14 01:20:03.613751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.423 [2024-07-14 01:20:03.622997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.423 [2024-07-14 01:20:03.623460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.423 [2024-07-14 01:20:03.623491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.423 [2024-07-14 01:20:03.623509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.423 [2024-07-14 01:20:03.623747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.423 [2024-07-14 01:20:03.623999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.423 [2024-07-14 01:20:03.624023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.423 [2024-07-14 01:20:03.624038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.423 [2024-07-14 01:20:03.627591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.423 [2024-07-14 01:20:03.636824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.423 [2024-07-14 01:20:03.637291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.423 [2024-07-14 01:20:03.637322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.423 [2024-07-14 01:20:03.637340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.423 [2024-07-14 01:20:03.637577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.423 [2024-07-14 01:20:03.637817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.423 [2024-07-14 01:20:03.637840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.423 [2024-07-14 01:20:03.637855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.423 [2024-07-14 01:20:03.641419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.423 [2024-07-14 01:20:03.650655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.423 [2024-07-14 01:20:03.651096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.423 [2024-07-14 01:20:03.651128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.423 [2024-07-14 01:20:03.651146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.423 [2024-07-14 01:20:03.651384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.423 [2024-07-14 01:20:03.651625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.423 [2024-07-14 01:20:03.651648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.423 [2024-07-14 01:20:03.651669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.423 [2024-07-14 01:20:03.655235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.423 [2024-07-14 01:20:03.664675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.423 [2024-07-14 01:20:03.665137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.423 [2024-07-14 01:20:03.665168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.423 [2024-07-14 01:20:03.665186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.423 [2024-07-14 01:20:03.665423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.423 [2024-07-14 01:20:03.665664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.423 [2024-07-14 01:20:03.665687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.665702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.669278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.424 [2024-07-14 01:20:03.678631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.679100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.679133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.679151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.679389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.679630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.679653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.679669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.683235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.424 [2024-07-14 01:20:03.692473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.692912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.692944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.692962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.693200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.693443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.693466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.693481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.697050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.424 [2024-07-14 01:20:03.706295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.706729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.706765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.706784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.707033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.707276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.707299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.707314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.710875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.424 [2024-07-14 01:20:03.720111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.720575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.720606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.720624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.720861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.721115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.721138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.721154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.724708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.424 [2024-07-14 01:20:03.733951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.734408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.734438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.734456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.734693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.734947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.734971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.734986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.738539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.424 [2024-07-14 01:20:03.747777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.748241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.748271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.748290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.748527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.748774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.748797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.748812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.752379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.424 [2024-07-14 01:20:03.761614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.762081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.762112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.762130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.762367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.762609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.762631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.762646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.766216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.424 [2024-07-14 01:20:03.775453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.775916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.775948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.775966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.776203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.776445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.776468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.776483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.780047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.424 [2024-07-14 01:20:03.789273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.789707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.789737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.789755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.790004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.790245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.790268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.790284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.793843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.424 [2024-07-14 01:20:03.803299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.803725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.803756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.803774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.804023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.804265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.804288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.804303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.807856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.424 [2024-07-14 01:20:03.817309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.817775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.817805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.817823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.818069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.818311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.818333] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.818348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.424 [2024-07-14 01:20:03.821928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.424 [2024-07-14 01:20:03.831183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.424 [2024-07-14 01:20:03.831682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.424 [2024-07-14 01:20:03.831714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.424 [2024-07-14 01:20:03.831732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.424 [2024-07-14 01:20:03.831981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.424 [2024-07-14 01:20:03.832224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.424 [2024-07-14 01:20:03.832247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.424 [2024-07-14 01:20:03.832263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.684 [2024-07-14 01:20:03.835972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.684 [2024-07-14 01:20:03.845173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.684 [2024-07-14 01:20:03.845631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.684 [2024-07-14 01:20:03.845663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.684 [2024-07-14 01:20:03.845691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.684 [2024-07-14 01:20:03.845941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.684 [2024-07-14 01:20:03.846183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.846206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.846221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.849781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:03.859032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.859498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.859529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.859547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.859784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.860037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.860061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.860077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.863633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:03.872875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.873303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.873334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.873352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.873589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.873830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.873853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.873878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.877438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:03.886894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.887362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.887393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.887411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.887648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.887901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.887931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.887947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.891497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:03.900726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.901164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.901195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.901213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.901450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.901691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.901714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.901730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.905297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:03.914737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.915180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.915210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.915228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.915465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.915706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.915729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.915745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.919315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:03.928566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.929018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.929049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.929067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.929304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.929545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.929568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.929583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.933144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:03.942577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.943032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.943063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.943081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.943319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.943560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.943583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.943598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.947168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:03.956397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.956850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.956888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.956907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.957145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.957386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.957409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.957424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.960985] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:03.970213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.970641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.970671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.970689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.970938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.971179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.971202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.971218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.974771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:03.984218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.984649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.984680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.984698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.984955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.985197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.985220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.985235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:03.988785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:03.998231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:03.998686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:03.998717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:03.998735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:03.998984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:03.999226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:03.999249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:03.999264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:04.002817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:04.012066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:04.012517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:04.012548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:04.012566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:04.012803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:04.013056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:04.013079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:04.013095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:04.016648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:04.025891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:04.026341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:04.026373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:04.026390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:04.026627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:04.026879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:04.026903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:04.026924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:04.030478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:04.039707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:04.040124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:04.040155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:04.040173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:04.040410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:04.040651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:04.040674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:04.040690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:04.044259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:04.053696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:04.054164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:04.054195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:04.054213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:04.054450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:04.054692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:04.054715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:04.054730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:04.058299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:04.067562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:04.068000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:04.068031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:04.068049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:04.068287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:04.068528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:04.068552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:04.068567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:04.072134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.685 [2024-07-14 01:20:04.081572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:04.082024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:04.082061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:04.082079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:04.082316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:04.082558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:04.082581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:04.082596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.685 [2024-07-14 01:20:04.086161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.685 [2024-07-14 01:20:04.095528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.685 [2024-07-14 01:20:04.096006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.685 [2024-07-14 01:20:04.096038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.685 [2024-07-14 01:20:04.096057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.685 [2024-07-14 01:20:04.096294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.685 [2024-07-14 01:20:04.096556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.685 [2024-07-14 01:20:04.096591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.685 [2024-07-14 01:20:04.096617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.100246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.109386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.109823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.109855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.109885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.110125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.110366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.110389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.110404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.113967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.945 [2024-07-14 01:20:04.123204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.123661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.123692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.123710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.123960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.124208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.124231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.124246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.127802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.137043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.137468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.137499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.137517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.137754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.138008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.138032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.138047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.141602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.945 [2024-07-14 01:20:04.151060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.151525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.151557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.151574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.151811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.152064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.152088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.152104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.155659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.164896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.165349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.165379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.165397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.165634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.165888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.165912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.165928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.169490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.945 [2024-07-14 01:20:04.178736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.179178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.179209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.179227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.179466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.179707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.179730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.179745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.183308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.192752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.193187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.193218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.193236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.193473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.193714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.193737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.193752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.197320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.945 [2024-07-14 01:20:04.206776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.207240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.207271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.207289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.207527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.207769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.207792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.207808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.211372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.220606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.221041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.221071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.221095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.221334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.221574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.221597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.221613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.225178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.945 [2024-07-14 01:20:04.234626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.235207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.235267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.235285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.235523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.235764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.235787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.235803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.239369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.248624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.249092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.249123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.249141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.249378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.249618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.249641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.249656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.253236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.945 [2024-07-14 01:20:04.262475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.262909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.262941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.262958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.263196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.263437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.263465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.263481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.267048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.276319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.276760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.276791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.276808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.277056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.277298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.277321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.277336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.280895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.945 [2024-07-14 01:20:04.290150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.290590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.290622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.290640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.290889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.291141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.291171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.291187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.294739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.303981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.304408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.304439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.304458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.304696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.304946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.304970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.304986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.308535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.945 [2024-07-14 01:20:04.317979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.318492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.318523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.318541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.318778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.945 [2024-07-14 01:20:04.319029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.945 [2024-07-14 01:20:04.319053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.945 [2024-07-14 01:20:04.319069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.945 [2024-07-14 01:20:04.322620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:14.945 [2024-07-14 01:20:04.331852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.945 [2024-07-14 01:20:04.332289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.945 [2024-07-14 01:20:04.332320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.945 [2024-07-14 01:20:04.332338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.945 [2024-07-14 01:20:04.332575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.946 [2024-07-14 01:20:04.332816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.946 [2024-07-14 01:20:04.332840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.946 [2024-07-14 01:20:04.332855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.946 [2024-07-14 01:20:04.336423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:14.946 [2024-07-14 01:20:04.345905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:14.946 [2024-07-14 01:20:04.346360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:14.946 [2024-07-14 01:20:04.346391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:14.946 [2024-07-14 01:20:04.346410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:14.946 [2024-07-14 01:20:04.346647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:14.946 [2024-07-14 01:20:04.346902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:14.946 [2024-07-14 01:20:04.346925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:14.946 [2024-07-14 01:20:04.346941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:14.946 [2024-07-14 01:20:04.350505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.206 [2024-07-14 01:20:04.359979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.206 [2024-07-14 01:20:04.360566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.206 [2024-07-14 01:20:04.360624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.206 [2024-07-14 01:20:04.360643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.206 [2024-07-14 01:20:04.360928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.206 [2024-07-14 01:20:04.361172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.206 [2024-07-14 01:20:04.361195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.206 [2024-07-14 01:20:04.361212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.206 [2024-07-14 01:20:04.364839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.206 [2024-07-14 01:20:04.373886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.206 [2024-07-14 01:20:04.374383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.206 [2024-07-14 01:20:04.374432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.206 [2024-07-14 01:20:04.374450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.206 [2024-07-14 01:20:04.374688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.206 [2024-07-14 01:20:04.374938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.206 [2024-07-14 01:20:04.374961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.206 [2024-07-14 01:20:04.374977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.206 [2024-07-14 01:20:04.378534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.206 [2024-07-14 01:20:04.387825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.206 [2024-07-14 01:20:04.388306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.206 [2024-07-14 01:20:04.388338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.206 [2024-07-14 01:20:04.388356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.388593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.388835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.388859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.388884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.392441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.207 [2024-07-14 01:20:04.401675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.402089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.402120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.402138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.402375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.402616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.402639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.402660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.406224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.207 [2024-07-14 01:20:04.415664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.416079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.416110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.416128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.416365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.416606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.416629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.416644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.420209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.207 [2024-07-14 01:20:04.429658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.430097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.430128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.430146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.430384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.430625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.430647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.430663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.434227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.207 [2024-07-14 01:20:04.443671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.444099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.444130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.444148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.444385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.444626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.444649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.444664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.448227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.207 [2024-07-14 01:20:04.457609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.458038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.458075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.458093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.458331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.458572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.458598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.458613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.462178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.207 [2024-07-14 01:20:04.471627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.472088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.472120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.472137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.472374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.472616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.472639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.472654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.476218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.207 [2024-07-14 01:20:04.485480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.485970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.486001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.486019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.486256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.486496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.486519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.486535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.490107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.207 [2024-07-14 01:20:04.499338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.499789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.499819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.499837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.500084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.500332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.500355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.500370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.503937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.207 [2024-07-14 01:20:04.513200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.513654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.513685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.513703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.513950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.514201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.514224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.514239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.517795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.207 [2024-07-14 01:20:04.527045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.527500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.207 [2024-07-14 01:20:04.527530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.207 [2024-07-14 01:20:04.527548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.207 [2024-07-14 01:20:04.527785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.207 [2024-07-14 01:20:04.528037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.207 [2024-07-14 01:20:04.528061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.207 [2024-07-14 01:20:04.528076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.207 [2024-07-14 01:20:04.531628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.207 [2024-07-14 01:20:04.540856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.207 [2024-07-14 01:20:04.541338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.208 [2024-07-14 01:20:04.541368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.208 [2024-07-14 01:20:04.541387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.208 [2024-07-14 01:20:04.541624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.208 [2024-07-14 01:20:04.541874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.208 [2024-07-14 01:20:04.541898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.208 [2024-07-14 01:20:04.541913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.208 [2024-07-14 01:20:04.545487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.208 [2024-07-14 01:20:04.554737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.208 [2024-07-14 01:20:04.555207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.208 [2024-07-14 01:20:04.555238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.208 [2024-07-14 01:20:04.555256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.208 [2024-07-14 01:20:04.555493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.208 [2024-07-14 01:20:04.555734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.208 [2024-07-14 01:20:04.555756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.208 [2024-07-14 01:20:04.555771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.208 [2024-07-14 01:20:04.559333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.208 [2024-07-14 01:20:04.568561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.208 [2024-07-14 01:20:04.569021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.208 [2024-07-14 01:20:04.569052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.208 [2024-07-14 01:20:04.569070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.208 [2024-07-14 01:20:04.569308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.208 [2024-07-14 01:20:04.569549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.208 [2024-07-14 01:20:04.569572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.208 [2024-07-14 01:20:04.569587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.208 [2024-07-14 01:20:04.573179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.208 [2024-07-14 01:20:04.582413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.208 [2024-07-14 01:20:04.582877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.208 [2024-07-14 01:20:04.582908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.208 [2024-07-14 01:20:04.582926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.208 [2024-07-14 01:20:04.583164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.208 [2024-07-14 01:20:04.583404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.208 [2024-07-14 01:20:04.583427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.208 [2024-07-14 01:20:04.583443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.208 [2024-07-14 01:20:04.587006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.208 [2024-07-14 01:20:04.596240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.208 [2024-07-14 01:20:04.596711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.208 [2024-07-14 01:20:04.596741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.208 [2024-07-14 01:20:04.596764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.208 [2024-07-14 01:20:04.597014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.208 [2024-07-14 01:20:04.597255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.208 [2024-07-14 01:20:04.597278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.208 [2024-07-14 01:20:04.597294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.208 [2024-07-14 01:20:04.600849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.208 [2024-07-14 01:20:04.610085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.208 [2024-07-14 01:20:04.610537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.208 [2024-07-14 01:20:04.610568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.208 [2024-07-14 01:20:04.610586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.208 [2024-07-14 01:20:04.610823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.208 [2024-07-14 01:20:04.611074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.208 [2024-07-14 01:20:04.611098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.208 [2024-07-14 01:20:04.611113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.208 [2024-07-14 01:20:04.614717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.468 [2024-07-14 01:20:04.623999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.468 [2024-07-14 01:20:04.624485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.468 [2024-07-14 01:20:04.624517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.468 [2024-07-14 01:20:04.624535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.468 [2024-07-14 01:20:04.624775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.468 [2024-07-14 01:20:04.625028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.468 [2024-07-14 01:20:04.625052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.468 [2024-07-14 01:20:04.625068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.468 [2024-07-14 01:20:04.628637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.468 [2024-07-14 01:20:04.637900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.468 [2024-07-14 01:20:04.638358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.468 [2024-07-14 01:20:04.638390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.468 [2024-07-14 01:20:04.638408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.468 [2024-07-14 01:20:04.638646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.468 [2024-07-14 01:20:04.638898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.468 [2024-07-14 01:20:04.638928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.468 [2024-07-14 01:20:04.638944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.468 [2024-07-14 01:20:04.642498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.468 [2024-07-14 01:20:04.651729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.468 [2024-07-14 01:20:04.652199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.468 [2024-07-14 01:20:04.652229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.468 [2024-07-14 01:20:04.652248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.468 [2024-07-14 01:20:04.652485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.468 [2024-07-14 01:20:04.652726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.468 [2024-07-14 01:20:04.652749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.468 [2024-07-14 01:20:04.652764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.468 [2024-07-14 01:20:04.656327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.468 [2024-07-14 01:20:04.665556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.468 [2024-07-14 01:20:04.665994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.468 [2024-07-14 01:20:04.666026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.468 [2024-07-14 01:20:04.666044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.468 [2024-07-14 01:20:04.666286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.468 [2024-07-14 01:20:04.666526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.468 [2024-07-14 01:20:04.666549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.666564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.670131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.469 [2024-07-14 01:20:04.679568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.679999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.680030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.680048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.680286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.680527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.680550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.680565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.684123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.469 [2024-07-14 01:20:04.693565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.693995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.694026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.694044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.694281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.694523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.694546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.694561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.698123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.469 [2024-07-14 01:20:04.707497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.707945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.707976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.707994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.708231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.708472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.708495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.708511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.712075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.469 [2024-07-14 01:20:04.721512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.721968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.722000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.722018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.722255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.722496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.722519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.722534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.726094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.469 [2024-07-14 01:20:04.735529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.735988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.736020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.736038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.736281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.736522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.736545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.736560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.740127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.469 [2024-07-14 01:20:04.749362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.749826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.749857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.749885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.750125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.750366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.750389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.750404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.753962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.469 [2024-07-14 01:20:04.763187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.763623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.763653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.763671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.763920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.764161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.764185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.764200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.767750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.469 [2024-07-14 01:20:04.777188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.777595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.777626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.777643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.777891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.778133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.778155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.778176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.781731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.469 [2024-07-14 01:20:04.791174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.791631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.791662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.791680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.791928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.792169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.792192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.792207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.795757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.469 [2024-07-14 01:20:04.804995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.805428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.805459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.805477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.805714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.469 [2024-07-14 01:20:04.805966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.469 [2024-07-14 01:20:04.805990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.469 [2024-07-14 01:20:04.806005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.469 [2024-07-14 01:20:04.809555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.469 [2024-07-14 01:20:04.818992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.469 [2024-07-14 01:20:04.819448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.469 [2024-07-14 01:20:04.819479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.469 [2024-07-14 01:20:04.819496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.469 [2024-07-14 01:20:04.819734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.470 [2024-07-14 01:20:04.819986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.470 [2024-07-14 01:20:04.820010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.470 [2024-07-14 01:20:04.820025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.470 [2024-07-14 01:20:04.823574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.470 [2024-07-14 01:20:04.833014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.470 [2024-07-14 01:20:04.833444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.470 [2024-07-14 01:20:04.833480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.470 [2024-07-14 01:20:04.833499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.470 [2024-07-14 01:20:04.833738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.470 [2024-07-14 01:20:04.833990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.470 [2024-07-14 01:20:04.834015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.470 [2024-07-14 01:20:04.834030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.470 [2024-07-14 01:20:04.837581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.470 [2024-07-14 01:20:04.847038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.470 [2024-07-14 01:20:04.847469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.470 [2024-07-14 01:20:04.847499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.470 [2024-07-14 01:20:04.847516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.470 [2024-07-14 01:20:04.847753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.470 [2024-07-14 01:20:04.848010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.470 [2024-07-14 01:20:04.848034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.470 [2024-07-14 01:20:04.848049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.470 [2024-07-14 01:20:04.851602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.470 [2024-07-14 01:20:04.861044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.470 [2024-07-14 01:20:04.861500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.470 [2024-07-14 01:20:04.861530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.470 [2024-07-14 01:20:04.861548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.470 [2024-07-14 01:20:04.861785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.470 [2024-07-14 01:20:04.862037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.470 [2024-07-14 01:20:04.862061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.470 [2024-07-14 01:20:04.862077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.470 [2024-07-14 01:20:04.865628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.470 [2024-07-14 01:20:04.874855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.470 [2024-07-14 01:20:04.875323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.470 [2024-07-14 01:20:04.875354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.470 [2024-07-14 01:20:04.875372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.470 [2024-07-14 01:20:04.875608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.470 [2024-07-14 01:20:04.875856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.470 [2024-07-14 01:20:04.875889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.470 [2024-07-14 01:20:04.875906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.470 [2024-07-14 01:20:04.879590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.730 [2024-07-14 01:20:04.888816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.730 [2024-07-14 01:20:04.889262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-14 01:20:04.889294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.730 [2024-07-14 01:20:04.889313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.730 [2024-07-14 01:20:04.889550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.730 [2024-07-14 01:20:04.889790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.730 [2024-07-14 01:20:04.889813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.730 [2024-07-14 01:20:04.889828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.730 [2024-07-14 01:20:04.893435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.730 [2024-07-14 01:20:04.902662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.730 [2024-07-14 01:20:04.903103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.730 [2024-07-14 01:20:04.903134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.730 [2024-07-14 01:20:04.903152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.730 [2024-07-14 01:20:04.903389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:04.903630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:04.903653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:04.903668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:04.907230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.731 [2024-07-14 01:20:04.916665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:04.917130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:04.917161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:04.917179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:04.917416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:04.917657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:04.917680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:04.917695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:04.921265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.731 [2024-07-14 01:20:04.930499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:04.930956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:04.930988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:04.931006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:04.931244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:04.931485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:04.931507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:04.931522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:04.935085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.731 [2024-07-14 01:20:04.944519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:04.944975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:04.945007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:04.945025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:04.945262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:04.945504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:04.945527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:04.945542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:04.949110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.731 [2024-07-14 01:20:04.958334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:04.958794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:04.958824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:04.958841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:04.959088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:04.959329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:04.959352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:04.959367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:04.962925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.731 [2024-07-14 01:20:04.972149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:04.972603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:04.972634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:04.972658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:04.972907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:04.973149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:04.973172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:04.973187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:04.976739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.731 [2024-07-14 01:20:04.985974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:04.986408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:04.986439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:04.986457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:04.986694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:04.986947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:04.986971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:04.986986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:04.990536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.731 [2024-07-14 01:20:04.999982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:05.000426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:05.000457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:05.000474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:05.000711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:05.000964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:05.000988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:05.001003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:05.004556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.731 [2024-07-14 01:20:05.014003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:05.014468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:05.014500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:05.014518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:05.014755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:05.015007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:05.015037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:05.015053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:05.018606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.731 [2024-07-14 01:20:05.027836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:05.028299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:05.028329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:05.028347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:05.028585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:05.028825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:05.028848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:05.028863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:05.032431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.731 [2024-07-14 01:20:05.041663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:05.042135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:05.042166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.731 [2024-07-14 01:20:05.042184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.731 [2024-07-14 01:20:05.042421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.731 [2024-07-14 01:20:05.042662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.731 [2024-07-14 01:20:05.042685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.731 [2024-07-14 01:20:05.042700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.731 [2024-07-14 01:20:05.046267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.731 [2024-07-14 01:20:05.055493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.731 [2024-07-14 01:20:05.055953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.731 [2024-07-14 01:20:05.055984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.732 [2024-07-14 01:20:05.056002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.732 [2024-07-14 01:20:05.056239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.732 [2024-07-14 01:20:05.056481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.732 [2024-07-14 01:20:05.056504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.732 [2024-07-14 01:20:05.056519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.732 [2024-07-14 01:20:05.060080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.732 [2024-07-14 01:20:05.069308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.732 [2024-07-14 01:20:05.069742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.732 [2024-07-14 01:20:05.069774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.732 [2024-07-14 01:20:05.069792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.732 [2024-07-14 01:20:05.070042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.732 [2024-07-14 01:20:05.070283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.732 [2024-07-14 01:20:05.070306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.732 [2024-07-14 01:20:05.070322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.732 [2024-07-14 01:20:05.073880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.732 [2024-07-14 01:20:05.083308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.732 [2024-07-14 01:20:05.083744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.732 [2024-07-14 01:20:05.083775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.732 [2024-07-14 01:20:05.083793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.732 [2024-07-14 01:20:05.084041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.732 [2024-07-14 01:20:05.084282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.732 [2024-07-14 01:20:05.084305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.732 [2024-07-14 01:20:05.084321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.732 [2024-07-14 01:20:05.087880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.732 [2024-07-14 01:20:05.097308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.732 [2024-07-14 01:20:05.097740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.732 [2024-07-14 01:20:05.097770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.732 [2024-07-14 01:20:05.097788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.732 [2024-07-14 01:20:05.098037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.732 [2024-07-14 01:20:05.098278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.732 [2024-07-14 01:20:05.098301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.732 [2024-07-14 01:20:05.098316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.732 [2024-07-14 01:20:05.101872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.732 [2024-07-14 01:20:05.111304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.732 [2024-07-14 01:20:05.111751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.732 [2024-07-14 01:20:05.111781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.732 [2024-07-14 01:20:05.111799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.732 [2024-07-14 01:20:05.112054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.732 [2024-07-14 01:20:05.112295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.732 [2024-07-14 01:20:05.112318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.732 [2024-07-14 01:20:05.112333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.732 [2024-07-14 01:20:05.115891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.732 [2024-07-14 01:20:05.125117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.732 [2024-07-14 01:20:05.125556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.732 [2024-07-14 01:20:05.125586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.732 [2024-07-14 01:20:05.125604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.732 [2024-07-14 01:20:05.125841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.732 [2024-07-14 01:20:05.126091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.732 [2024-07-14 01:20:05.126115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.732 [2024-07-14 01:20:05.126130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.732 [2024-07-14 01:20:05.129683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.732 [2024-07-14 01:20:05.139190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.732 [2024-07-14 01:20:05.139630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.732 [2024-07-14 01:20:05.139662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.732 [2024-07-14 01:20:05.139680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.732 [2024-07-14 01:20:05.139930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.732 [2024-07-14 01:20:05.140189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.732 [2024-07-14 01:20:05.140215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.732 [2024-07-14 01:20:05.140230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.991 [2024-07-14 01:20:05.143894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.992 [2024-07-14 01:20:05.153072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.153509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.153540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.153558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.153797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.154050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.154074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.154095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.157650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.992 [2024-07-14 01:20:05.167092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.167561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.167592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.167610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.167848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.168098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.168122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.168137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.171693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.992 [2024-07-14 01:20:05.180938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.181346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.181376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.181394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.181631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.181882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.181906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.181921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.185475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.992 [2024-07-14 01:20:05.194926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.195397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.195427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.195445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.195682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.195935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.195958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.195974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.199524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.992 [2024-07-14 01:20:05.208756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.209197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.209228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.209246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.209484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.209725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.209748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.209764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.213325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.992 [2024-07-14 01:20:05.222758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.223223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.223253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.223272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.223509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.223750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.223773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.223790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.227352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.992 [2024-07-14 01:20:05.236576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.237019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.237050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.237068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.237306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.237547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.237570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.237586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.241147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.992 [2024-07-14 01:20:05.250588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.251056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.251086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.251104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.251342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.251594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.251617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.251633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.255196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.992 [2024-07-14 01:20:05.264420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.264886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.264918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.264935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.265173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.265414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.265437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.265453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.269014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.992 [2024-07-14 01:20:05.278239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.278698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.278729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.278747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.278995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.279237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.992 [2024-07-14 01:20:05.279260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.992 [2024-07-14 01:20:05.279275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.992 [2024-07-14 01:20:05.282825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.992 [2024-07-14 01:20:05.292053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.992 [2024-07-14 01:20:05.292514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.992 [2024-07-14 01:20:05.292544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.992 [2024-07-14 01:20:05.292562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.992 [2024-07-14 01:20:05.292800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.992 [2024-07-14 01:20:05.293051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.993 [2024-07-14 01:20:05.293075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.993 [2024-07-14 01:20:05.293091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.993 [2024-07-14 01:20:05.296650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.993 [2024-07-14 01:20:05.305881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.993 [2024-07-14 01:20:05.306310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.993 [2024-07-14 01:20:05.306341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.993 [2024-07-14 01:20:05.306359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.993 [2024-07-14 01:20:05.306596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.993 [2024-07-14 01:20:05.306837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.993 [2024-07-14 01:20:05.306860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.993 [2024-07-14 01:20:05.306887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.993 [2024-07-14 01:20:05.310439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.993 [2024-07-14 01:20:05.319873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.993 [2024-07-14 01:20:05.320329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.993 [2024-07-14 01:20:05.320360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.993 [2024-07-14 01:20:05.320378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.993 [2024-07-14 01:20:05.320615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.993 [2024-07-14 01:20:05.320856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.993 [2024-07-14 01:20:05.320890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.993 [2024-07-14 01:20:05.320906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.993 [2024-07-14 01:20:05.324456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.993 [2024-07-14 01:20:05.333686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.993 [2024-07-14 01:20:05.334131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.993 [2024-07-14 01:20:05.334162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.993 [2024-07-14 01:20:05.334180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.993 [2024-07-14 01:20:05.334417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.993 [2024-07-14 01:20:05.334657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.993 [2024-07-14 01:20:05.334681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.993 [2024-07-14 01:20:05.334696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.993 [2024-07-14 01:20:05.338258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.993 [2024-07-14 01:20:05.347698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.993 [2024-07-14 01:20:05.348159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.993 [2024-07-14 01:20:05.348190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.993 [2024-07-14 01:20:05.348214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.993 [2024-07-14 01:20:05.348453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.993 [2024-07-14 01:20:05.348694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.993 [2024-07-14 01:20:05.348717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.993 [2024-07-14 01:20:05.348732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.993 [2024-07-14 01:20:05.352292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.993 [2024-07-14 01:20:05.361734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.993 [2024-07-14 01:20:05.362173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.993 [2024-07-14 01:20:05.362204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.993 [2024-07-14 01:20:05.362222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.993 [2024-07-14 01:20:05.362459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.993 [2024-07-14 01:20:05.362700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.993 [2024-07-14 01:20:05.362723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.993 [2024-07-14 01:20:05.362738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.993 [2024-07-14 01:20:05.366301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:15.993 [2024-07-14 01:20:05.375739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.993 [2024-07-14 01:20:05.376182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.993 [2024-07-14 01:20:05.376213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.993 [2024-07-14 01:20:05.376231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.993 [2024-07-14 01:20:05.376468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.993 [2024-07-14 01:20:05.376710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.993 [2024-07-14 01:20:05.376733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.993 [2024-07-14 01:20:05.376748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.993 [2024-07-14 01:20:05.380306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:15.993 [2024-07-14 01:20:05.389743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.993 [2024-07-14 01:20:05.390204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:15.993 [2024-07-14 01:20:05.390235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:15.993 [2024-07-14 01:20:05.390253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:15.993 [2024-07-14 01:20:05.390490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:15.993 [2024-07-14 01:20:05.390731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:15.993 [2024-07-14 01:20:05.390759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:15.993 [2024-07-14 01:20:05.390775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:15.993 [2024-07-14 01:20:05.394334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
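Note on the repeated failures above: errno = 111 from posix_sock_create is ECONNREFUSED, i.e. every reconnect attempt targets 10.0.0.2 port 4420 while nothing is listening there (the nvmf target process has been killed, as the next chunk shows), so each spdk_nvme_ctrlr_reconnect_poll_async pass fails and bdev_nvme reports "Resetting controller failed." A minimal shell sketch for confirming that state from the test host, assuming the standard ss and nc utilities are available; the address and port are taken from the log lines above:

    #!/usr/bin/env bash
    # Check whether anything is listening on the NVMe-oF/TCP port from the log.
    ADDR=10.0.0.2
    PORT=4420
    # Empty output here means no listener on the port.
    ss -ltn "sport = :${PORT}"
    # Zero-I/O connect attempt; a refused connection exits non-zero,
    # mirroring the ECONNREFUSED (errno 111) seen in the log.
    if nc -z -w 1 "${ADDR}" "${PORT}"; then
        echo "port ${PORT} on ${ADDR} accepts connections"
    else
        echo "connect() refused or timed out, matching errno 111 in the log"
    fi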
00:34:15.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1296487 Killed "${NVMF_APP[@]}" "$@" 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1297444 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1297444 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1297444 ']' 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:15.993 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.993 [2024-07-14 01:20:05.403705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.253 [2024-07-14 01:20:05.406632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.253 [2024-07-14 01:20:05.406670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.253 [2024-07-14 01:20:05.406691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.253 [2024-07-14 01:20:05.406944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.253 [2024-07-14 01:20:05.407195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.253 [2024-07-14 01:20:05.407218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.253 [2024-07-14 01:20:05.407233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.253 [2024-07-14 01:20:05.410911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
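The "Killed \"${NVMF_APP[@]}\"" line is the test deliberately terminating the previous nvmf target (pid 1296487); bdevperf.sh then re-runs tgt_init, which launches a fresh nvmf_tgt (-m 0xE core mask, -e 0xFFFF trace flags, pid 1297444) and blocks in waitforlisten until the new process is up and its RPC socket at /var/tmp/spdk.sock is available. A rough shell sketch of that wait pattern, not SPDK's actual waitforlisten helper; only the pid and socket path are taken from the log:

    # Illustrative "wait for the target's RPC socket" loop.
    pid=1297444
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        # Bail out early if the target died before creating its socket.
        kill -0 "${pid}" 2>/dev/null || { echo "target ${pid} exited" >&2; exit 1; }
        # Success once the UNIX-domain RPC socket exists.
        [ -S "${rpc_sock}" ] && { echo "target ${pid} listening on ${rpc_sock}"; exit 0; }
        sleep 0.1
    done
    echo "timed out waiting for ${rpc_sock}" >&2
    exit 1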
00:34:16.253 [2024-07-14 01:20:05.417650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.253 [2024-07-14 01:20:05.418133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.253 [2024-07-14 01:20:05.418166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.253 [2024-07-14 01:20:05.418186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.253 [2024-07-14 01:20:05.418425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.253 [2024-07-14 01:20:05.418667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.253 [2024-07-14 01:20:05.418698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.253 [2024-07-14 01:20:05.418715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.253 [2024-07-14 01:20:05.422296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.253 [2024-07-14 01:20:05.431544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.253 [2024-07-14 01:20:05.431979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.253 [2024-07-14 01:20:05.432012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.253 [2024-07-14 01:20:05.432031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.253 [2024-07-14 01:20:05.432269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.253 [2024-07-14 01:20:05.432511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.253 [2024-07-14 01:20:05.432535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.253 [2024-07-14 01:20:05.432551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.253 [2024-07-14 01:20:05.436126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.253 [2024-07-14 01:20:05.445367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.253 [2024-07-14 01:20:05.445834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.253 [2024-07-14 01:20:05.445893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.253 [2024-07-14 01:20:05.445925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.253 [2024-07-14 01:20:05.446211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.253 [2024-07-14 01:20:05.446478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.253 [2024-07-14 01:20:05.446505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.253 [2024-07-14 01:20:05.446531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.253 [2024-07-14 01:20:05.450141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.253 [2024-07-14 01:20:05.450676] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:16.253 [2024-07-14 01:20:05.450753] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:16.253 [2024-07-14 01:20:05.459396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.253 [2024-07-14 01:20:05.459881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.253 [2024-07-14 01:20:05.459917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.253 [2024-07-14 01:20:05.459949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.253 [2024-07-14 01:20:05.460233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.253 [2024-07-14 01:20:05.460498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.253 [2024-07-14 01:20:05.460526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.253 [2024-07-14 01:20:05.460561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.253 [2024-07-14 01:20:05.464190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.253 [2024-07-14 01:20:05.473451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.253 [2024-07-14 01:20:05.473958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.253 [2024-07-14 01:20:05.473994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.253 [2024-07-14 01:20:05.474026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.253 [2024-07-14 01:20:05.474309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.253 [2024-07-14 01:20:05.474574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.253 [2024-07-14 01:20:05.474601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.253 [2024-07-14 01:20:05.474627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.253 [2024-07-14 01:20:05.478248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.253 EAL: No free 2048 kB hugepages reported on node 1 00:34:16.253 [2024-07-14 01:20:05.487315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.253 [2024-07-14 01:20:05.487771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.253 [2024-07-14 01:20:05.487806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.253 [2024-07-14 01:20:05.487836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.253 [2024-07-14 01:20:05.488129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.253 [2024-07-14 01:20:05.488397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.253 [2024-07-14 01:20:05.488423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.488449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.492086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.254 [2024-07-14 01:20:05.501164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.501659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.501694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.501724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.502021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.502288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.502315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.502341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.505953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.254 [2024-07-14 01:20:05.515021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.515488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.515524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.515555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.515836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.516118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.516145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.516172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.519786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.254 [2024-07-14 01:20:05.526875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:16.254 [2024-07-14 01:20:05.528910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.529403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.529449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.529480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.529766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.530047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.530075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.530100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.533801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.254 [2024-07-14 01:20:05.542954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.543640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.543685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.543730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.544051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.544323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.544351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.544380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.548031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.254 [2024-07-14 01:20:05.556891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.557419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.557455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.557486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.557777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.558059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.558087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.558113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.561721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.254 [2024-07-14 01:20:05.570779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.571289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.571333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.571364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.571646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.571934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.571962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.571988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.575594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.254 [2024-07-14 01:20:05.584729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.585479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.585538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.585575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.585887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.586163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.586198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.586227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.589941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.254 [2024-07-14 01:20:05.598823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.599416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.599466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.599500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.599784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.600077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.600105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.600148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.603754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.254 [2024-07-14 01:20:05.612834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.613340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.613377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.613407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.613692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.613969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.613997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.614022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.617629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.254 [2024-07-14 01:20:05.624281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:16.254 [2024-07-14 01:20:05.624318] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:16.254 [2024-07-14 01:20:05.624335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:16.254 [2024-07-14 01:20:05.624348] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:16.254 [2024-07-14 01:20:05.624360] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:16.254 [2024-07-14 01:20:05.624423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:16.254 [2024-07-14 01:20:05.624477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:16.254 [2024-07-14 01:20:05.624481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.254 [2024-07-14 01:20:05.626816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.627311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.627346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.627375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.627639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.627899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.627924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.627949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:16.254 [2024-07-14 01:20:05.631274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.254 [2024-07-14 01:20:05.640509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.641169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.641211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.641245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.641530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.641787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.641812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.641838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.645217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.254 [2024-07-14 01:20:05.654000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.254 [2024-07-14 01:20:05.654628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.254 [2024-07-14 01:20:05.654671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.254 [2024-07-14 01:20:05.654704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.254 [2024-07-14 01:20:05.655019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.254 [2024-07-14 01:20:05.655281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.254 [2024-07-14 01:20:05.655305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.254 [2024-07-14 01:20:05.655330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.254 [2024-07-14 01:20:05.658502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.514 [2024-07-14 01:20:05.667810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.514 [2024-07-14 01:20:05.668515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.514 [2024-07-14 01:20:05.668568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.514 [2024-07-14 01:20:05.668601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.514 [2024-07-14 01:20:05.668918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.514 [2024-07-14 01:20:05.669172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.514 [2024-07-14 01:20:05.669196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.514 [2024-07-14 01:20:05.669222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 [2024-07-14 01:20:05.672681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.515 [2024-07-14 01:20:05.681435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.681957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.681995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.682027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 [2024-07-14 01:20:05.682317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 [2024-07-14 01:20:05.682545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.682569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.682605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 [2024-07-14 01:20:05.685783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.515 [2024-07-14 01:20:05.694987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.695615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.695658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.695690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 [2024-07-14 01:20:05.695997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 [2024-07-14 01:20:05.696247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.696271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.696296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 [2024-07-14 01:20:05.699467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.515 [2024-07-14 01:20:05.708543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.709019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.709054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.709084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 [2024-07-14 01:20:05.709361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 [2024-07-14 01:20:05.709586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.709609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.709632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 [2024-07-14 01:20:05.712917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.515 [2024-07-14 01:20:05.722045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.722490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.722522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.722550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 [2024-07-14 01:20:05.722814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 [2024-07-14 01:20:05.723080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.723104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.723127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 [2024-07-14 01:20:05.726409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.515 [2024-07-14 01:20:05.735571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.735989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.736031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.736059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 [2024-07-14 01:20:05.736336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 [2024-07-14 01:20:05.736561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.736584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.736605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 [2024-07-14 01:20:05.739879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.515 [2024-07-14 01:20:05.749102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.749582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.749613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.749641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 [2024-07-14 01:20:05.749940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 [2024-07-14 01:20:05.750171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.750195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.750230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.515 [2024-07-14 01:20:05.753403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.515 [2024-07-14 01:20:05.757548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.515 [2024-07-14 01:20:05.762531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.762994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.763025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.763052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.515 [2024-07-14 01:20:05.763318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.515 [2024-07-14 01:20:05.763549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.763573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.763594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:16.515 [2024-07-14 01:20:05.766892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.515 [2024-07-14 01:20:05.776092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.776568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.776599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.776626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 [2024-07-14 01:20:05.776928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 [2024-07-14 01:20:05.777159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.777205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.777226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 [2024-07-14 01:20:05.780413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.515 [2024-07-14 01:20:05.789561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.790166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.515 [2024-07-14 01:20:05.790210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.515 [2024-07-14 01:20:05.790243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.515 [2024-07-14 01:20:05.790520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.515 [2024-07-14 01:20:05.790748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.515 [2024-07-14 01:20:05.790772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.515 [2024-07-14 01:20:05.790797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.515 [2024-07-14 01:20:05.794040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:16.515 Malloc0 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.515 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.515 [2024-07-14 01:20:05.803134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.515 [2024-07-14 01:20:05.803582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.516 [2024-07-14 01:20:05.803614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x925f70 with addr=10.0.0.2, port=4420 00:34:16.516 [2024-07-14 01:20:05.803641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925f70 is same with the state(5) to be set 00:34:16.516 [2024-07-14 01:20:05.803903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x925f70 (9): Bad file descriptor 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.516 [2024-07-14 01:20:05.804166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:16.516 [2024-07-14 01:20:05.804190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.516 [2024-07-14 01:20:05.804212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.516 [2024-07-14 01:20:05.807520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.516 [2024-07-14 01:20:05.815815] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.516 [2024-07-14 01:20:05.816705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.516 01:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1296775 00:34:16.775 [2024-07-14 01:20:05.934641] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
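
For reference, the tgt_init/rpc_cmd calls recorded above (host/bdevperf.sh lines 17-21) amount to the following standalone RPC sequence against the restarted nvmf_tgt. This is a hedged sketch, not the test's actual invocation: it uses SPDK's scripts/rpc.py instead of the rpc_cmd wrapper, assumes the default /var/tmp/spdk.sock RPC socket and a working directory at the SPDK checkout, and takes the arguments verbatim from the log.

    # recreate the TCP transport and a 64 MiB malloc bdev (512-byte blocks) to export
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # rebuild the subsystem, attach the namespace, and listen again on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is back, the bdevperf reconnect loop that had been failing with connect() errno 111 in the entries above completes ("Resetting controller successful") and the test proceeds to the I/O verification phase summarized below.
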
00:34:26.761 00:34:26.761 Latency(us) 00:34:26.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:26.761 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:26.761 Verification LBA range: start 0x0 length 0x4000 00:34:26.761 Nvme1n1 : 15.01 6711.04 26.21 8676.02 0.00 8294.29 849.54 22136.60 00:34:26.761 =================================================================================================================== 00:34:26.761 Total : 6711.04 26.21 8676.02 0.00 8294.29 849.54 22136.60 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:26.761 rmmod nvme_tcp 00:34:26.761 rmmod nvme_fabrics 00:34:26.761 rmmod nvme_keyring 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1297444 ']' 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1297444 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1297444 ']' 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1297444 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1297444 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1297444' 00:34:26.761 killing process with pid 1297444 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1297444 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1297444 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:34:26.761 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:26.762 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:26.762 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:26.762 01:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.762 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:26.762 01:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.143 01:20:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:28.143 00:34:28.143 real 0m22.511s 00:34:28.143 user 1m0.474s 00:34:28.143 sys 0m4.169s 00:34:28.143 01:20:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:28.143 01:20:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:28.143 ************************************ 00:34:28.143 END TEST nvmf_bdevperf 00:34:28.143 ************************************ 00:34:28.143 01:20:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:28.143 01:20:17 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:28.143 01:20:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:28.143 01:20:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:28.143 01:20:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:28.143 ************************************ 00:34:28.143 START TEST nvmf_target_disconnect 00:34:28.143 ************************************ 00:34:28.143 01:20:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:28.402 * Looking for test storage... 
00:34:28.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:28.402 01:20:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:30.302 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:30.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:30.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.303 01:20:19 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:30.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:30.303 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:30.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:34:30.303 00:34:30.303 --- 10.0.0.2 ping statistics --- 00:34:30.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.303 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:30.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:34:30.303 00:34:30.303 --- 10.0.0.1 ping statistics --- 00:34:30.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.303 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:30.303 ************************************ 00:34:30.303 START TEST nvmf_target_disconnect_tc1 00:34:30.303 ************************************ 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:30.303 
01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:30.303 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.303 [2024-07-14 01:20:19.696695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.303 [2024-07-14 01:20:19.696779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85d590 with addr=10.0.0.2, port=4420 00:34:30.303 [2024-07-14 01:20:19.696834] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:30.303 [2024-07-14 01:20:19.696882] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:30.303 [2024-07-14 01:20:19.696911] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:30.303 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:30.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:30.303 Initializing NVMe Controllers 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:30.303 00:34:30.303 real 0m0.094s 00:34:30.303 user 0m0.039s 00:34:30.303 sys 0m0.054s 
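Before the tc1 case above ran, nvmf_tcp_init moved one E810 port into a private network namespace and addressed both ends, so the initiator side (10.0.0.1 on cvl_0_1) can reach the target side (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk). tc1 then launched the reconnect example against 10.0.0.2:4420 with no target listening yet, so the expected outcome is exactly the connect() errno 111 (ECONNREFUSED) failure shown, and the NOT wrapper turns that failure into a passing test. The interface plumbing from earlier in the trace condenses to the following, with names and addresses taken from the log (run as root):

    # Condensed restatement of the nvmf_tcp_init steps visible above.
    NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                            # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target address
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1     # sanity-check both directions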
00:34:30.303 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:30.304 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:30.304 ************************************ 00:34:30.304 END TEST nvmf_target_disconnect_tc1 00:34:30.304 ************************************ 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:30.563 ************************************ 00:34:30.563 START TEST nvmf_target_disconnect_tc2 00:34:30.563 ************************************ 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1300583 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1300583 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1300583 ']' 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
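The tc2 case starting here first brings up a real target via disconnect_init: nvmf_tgt is launched inside the target namespace, the script waits for its RPC socket, and a malloc namespace is exported over TCP on 10.0.0.2:4420. The trace that follows shows those RPC calls; the sketch below restates them as plain rpc.py invocations. The $SPDK path, the rpc wrapper, and the socket-polling loop are placeholders for this sketch only (the log uses the jenkins workspace path and the waitforlisten helper).

    SPDK=/path/to/spdk                                       # placeholder for the SPDK build tree
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done      # crude stand-in for waitforlisten
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }
    rpc bdev_malloc_create 64 512 -b Malloc0                 # malloc bdev: 64 MB, 512-byte blocks
    rpc nvmf_create_transport -t tcp -o
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

tc2 then starts the reconnect example against that listener, kills the target with kill -9, and observes how the initiator's queue pairs fail; that is where the long runs of "Read/Write completed with error" and "qpair failed and we were unable to recover it" messages further down in this trace come from.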
00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:30.563 01:20:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.563 [2024-07-14 01:20:19.801933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:30.563 [2024-07-14 01:20:19.802016] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.563 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.563 [2024-07-14 01:20:19.867399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:30.563 [2024-07-14 01:20:19.958007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.563 [2024-07-14 01:20:19.958075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.563 [2024-07-14 01:20:19.958090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.563 [2024-07-14 01:20:19.958101] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.563 [2024-07-14 01:20:19.958111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:30.563 [2024-07-14 01:20:19.958268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:30.563 [2024-07-14 01:20:19.958329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:30.563 [2024-07-14 01:20:19.958351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:30.563 [2024-07-14 01:20:19.958355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 Malloc0 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:30.822 01:20:20 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 [2024-07-14 01:20:20.134125] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 [2024-07-14 01:20:20.162369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.822 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1300615 00:34:30.823 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:30.823 01:20:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:30.823 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:33.381 01:20:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1300583 00:34:33.381 01:20:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 
starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 [2024-07-14 01:20:22.186571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 [2024-07-14 01:20:22.186899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Write completed with error (sct=0, sc=8) 00:34:33.381 starting I/O failed 00:34:33.381 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O 
failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 [2024-07-14 01:20:22.187191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 
00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Write completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 Read completed with error (sct=0, sc=8) 00:34:33.382 starting I/O failed 00:34:33.382 [2024-07-14 01:20:22.187540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:33.382 [2024-07-14 01:20:22.187806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.187851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.188064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.188093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.188305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.188332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.188512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.188539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.188716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.188743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.188935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.188963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 
00:34:33.382 [2024-07-14 01:20:22.189119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.189147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.189423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.189474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.189838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.189900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.190081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.190107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.190286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.190329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.190512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.190539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.190834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.190861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.191049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.191077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.191283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.191309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.191508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.191535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 
00:34:33.382 [2024-07-14 01:20:22.191755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.191784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.191992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.192020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.192201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.192228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.382 [2024-07-14 01:20:22.192406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.382 [2024-07-14 01:20:22.192448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.382 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.192710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.192736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.192973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.193001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.193180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.193207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.193354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.193382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.193610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.193637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.193795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.193822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 
00:34:33.383 [2024-07-14 01:20:22.194067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.194095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.194259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.194286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.194496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.194522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.194762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.194792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.194978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.195006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.195179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.195206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.195386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.195413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.195586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.195612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.195916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.195944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.196123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.196150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 
00:34:33.383 [2024-07-14 01:20:22.196413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.196443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.196771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.196827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.197068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.197095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.197346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.197387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.197641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.197694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.197971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.197998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.198152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.198180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.198997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.199024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.199180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.199220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.199416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.199442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 
00:34:33.383 [2024-07-14 01:20:22.199652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.199694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.199876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.199907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.200111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.200153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.200332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.200358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.200531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.200557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.200771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.200797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.200954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.200982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.201173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.201200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.201411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.201438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 00:34:33.383 [2024-07-14 01:20:22.201640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.201666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it. 
00:34:33.383 [2024-07-14 01:20:22.201887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.383 [2024-07-14 01:20:22.201915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.383 qpair failed and we were unable to recover it.
[The same three-message sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." -- repeats continuously from 01:20:22.201887 through 01:20:22.249287 against addr=10.0.0.2, port=4420. The failing handle is tqpair=0x7f5f18000b90 for the first part of the run and changes to tqpair=0x7f5f28000b90 around 01:20:22.242; only the timestamps differ between repetitions.]
00:34:33.389 [2024-07-14 01:20:22.249260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.249287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it.
00:34:33.389 [2024-07-14 01:20:22.249507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.249537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.249731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.249760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.249939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.249967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.250162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.250192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.250390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.250419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.250645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.250671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.250922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.250949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.251102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.251128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.251320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.251348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.251497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.251524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 
00:34:33.389 [2024-07-14 01:20:22.251671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.251697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.251877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.251905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.252080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.252107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.252302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.252331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.252522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.252549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.252703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.252730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.252912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.252939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.253140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.253167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.253372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.253401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.253708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.253766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 
00:34:33.389 [2024-07-14 01:20:22.253994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.254021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.254217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.254252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.254448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.254475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.254628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.254655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.254883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.254913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.255118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.255147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.255346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.255373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.255567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.255596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.255790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.255819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 00:34:33.389 [2024-07-14 01:20:22.256029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.389 [2024-07-14 01:20:22.256056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.389 qpair failed and we were unable to recover it. 
00:34:33.389 [2024-07-14 01:20:22.256205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.256233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.256433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.256463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.256663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.256690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.256890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.256920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.257141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.257170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.257391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.257417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.257596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.257622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.257813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.257840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.258044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.258070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.258224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.258251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 
00:34:33.390 [2024-07-14 01:20:22.258566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.258618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.258843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.258875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.259034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.259061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.259264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.259290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.259499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.259525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.259752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.259781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.259973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.260004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.260179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.260206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.260407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.260438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.260693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.260722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 
00:34:33.390 [2024-07-14 01:20:22.260961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.260988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.261144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.261171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.261318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.261344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.261544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.261570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.261801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.261830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.262043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.262069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.262249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.262276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.262470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.262499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.262694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.262724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.262948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.262975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 
00:34:33.390 [2024-07-14 01:20:22.263208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.263235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.263408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.263439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.263613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.263638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.263838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.263873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.390 [2024-07-14 01:20:22.264046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.390 [2024-07-14 01:20:22.264076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.390 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.264303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.264329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.264559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.264588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.264801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.264830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.265031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.265058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.265221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.265249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 
00:34:33.391 [2024-07-14 01:20:22.265469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.265498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.265692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.265717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.265941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.265971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.266164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.266193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.266398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.266424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.266639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.266668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.266887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.266916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.267111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.267138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.267290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.267317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.267539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.267567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 
00:34:33.391 [2024-07-14 01:20:22.267738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.267764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.267992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.268022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.268245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.268274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.268471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.268498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.268693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.268723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.268891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.268921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.269146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.269172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.269374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.269403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.269606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.269633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.269859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.269895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 
00:34:33.391 [2024-07-14 01:20:22.270086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.270113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.270309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.270339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.270504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.270530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.270758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.270787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.271010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.271039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.271210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.271236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.271405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.271434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.271650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.271679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.271901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.271927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.272122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.272152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 
00:34:33.391 [2024-07-14 01:20:22.272373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.272402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.272602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.272632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.272856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.272890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.273058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.273089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.273288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.273315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.391 [2024-07-14 01:20:22.273543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.391 [2024-07-14 01:20:22.273572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.391 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.273762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.273791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.273988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.274015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.274170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.274196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.274419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.274449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 
00:34:33.392 [2024-07-14 01:20:22.274645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.274671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.274845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.274884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.275104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.275133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.275329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.275356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.275558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.275589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.275756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.275786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.276009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.276036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.276236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.276266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.276486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.276515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.276720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.276747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 
00:34:33.392 [2024-07-14 01:20:22.276919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.276946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.277169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.277198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.277393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.277420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.277620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.277650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.277820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.277849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.278079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.278105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.278315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.278344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.278539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.278569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.278753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.278779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.278972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.279001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 
00:34:33.392 [2024-07-14 01:20:22.279218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.279248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.279486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.279513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.279726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.279756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.279984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.280011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.280220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.280247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.280446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.280476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.280662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.280691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.280885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.280912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.281106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.281135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.281319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.281348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 
00:34:33.392 [2024-07-14 01:20:22.281566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.281592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.281781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.281815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.282040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.282069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.282268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.282294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.282516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.282545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.282738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.282767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.392 qpair failed and we were unable to recover it. 00:34:33.392 [2024-07-14 01:20:22.282995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.392 [2024-07-14 01:20:22.283022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.283223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.283252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.283448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.283478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.283677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.283703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 
00:34:33.393 [2024-07-14 01:20:22.283929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.283959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.284183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.284209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.284379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.284406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.284607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.284636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.284851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.284898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.285100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.285126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.285278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.285305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.285533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.285563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.285749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.285776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.285959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.285987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 
00:34:33.393 [2024-07-14 01:20:22.286204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.286233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.286430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.286457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.286666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.286696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.286857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.286891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.287085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.287111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.287303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.287332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.287517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.287546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.287713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.287739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.287930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.287960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.288165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.288194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 
00:34:33.393 [2024-07-14 01:20:22.288389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.288415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.288589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.288617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.288790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.288819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.289019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.289046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.289225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.289252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.289425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.289453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.289648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.289673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.289876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.289919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.290063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.290089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.290277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.290303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 
00:34:33.393 [2024-07-14 01:20:22.290479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.290510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.290705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.290733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.290970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.290996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.291191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.291219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.291410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.291440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.291614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.291641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.291874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.393 [2024-07-14 01:20:22.291913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.393 qpair failed and we were unable to recover it. 00:34:33.393 [2024-07-14 01:20:22.292091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.292120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.292351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.292377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.292604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.292633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 
00:34:33.394 [2024-07-14 01:20:22.292823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.292853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.293078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.293104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.293260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.293287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.293509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.293538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.293739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.293765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.293947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.293974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.294158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.294187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.294386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.294412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.294561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.294588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.294813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.294843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 
00:34:33.394 [2024-07-14 01:20:22.295046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.295072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.295283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.295312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.295493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.295522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.295723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.295750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.295929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.295957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.296152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.296184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.296385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.296411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.296608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.296637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.296831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.296875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.297078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.297105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 
00:34:33.394 [2024-07-14 01:20:22.297291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.297320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.297520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.297550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.297747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.297774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.297968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.297998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.298169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.298197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.298400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.298427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.298578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.298605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.298757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.298783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.298963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.298990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.299212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.299240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 
00:34:33.394 [2024-07-14 01:20:22.299566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.299617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.299840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.299872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.394 [2024-07-14 01:20:22.300077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.394 [2024-07-14 01:20:22.300107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.394 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.300300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.300329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.300518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.300544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.300696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.300722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.300889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.300916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.301118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.301145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.301369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.301398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.301603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.301632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 
00:34:33.395 [2024-07-14 01:20:22.301827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.301852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.302058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.302088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.302311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.302339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.302526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.302552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.302778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.302807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.302989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.303019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.303210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.303237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.303461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.303491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.303711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.303740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.303966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.303993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 
00:34:33.395 [2024-07-14 01:20:22.304200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.304229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.304426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.304455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.304653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.304679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.304879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.304908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.305077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.305105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.305306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.305332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.305508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.305535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.305717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.305746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.305943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.305973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.306173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.306202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 
00:34:33.395 [2024-07-14 01:20:22.306399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.306429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.306666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.306693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.306924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.306953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.307186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.307212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.307426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.307451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.307683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.307712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.307922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.307950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.308131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.308158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.308309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.308335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.308495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.308520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 
00:34:33.395 [2024-07-14 01:20:22.308695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.308721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.308898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.308928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.309117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.309146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.395 qpair failed and we were unable to recover it. 00:34:33.395 [2024-07-14 01:20:22.309340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.395 [2024-07-14 01:20:22.309366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.309612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.309639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.309842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.309876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.310085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.310112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.310259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.310285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.310479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.310508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.310730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.310757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 
00:34:33.396 [2024-07-14 01:20:22.310990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.311020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.311244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.311274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.311499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.311525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.311732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.311764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.311982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.312012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.312221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.312248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.312416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.312445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.312635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.312664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.312864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.312894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.313062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.313093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 
00:34:33.396 [2024-07-14 01:20:22.313288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.313318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.313539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.313566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.313797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.313823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.314006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.314033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.314242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.314268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.314438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.314467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.314657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.314687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.314885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.314912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.315135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.315169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.315413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.315443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 
00:34:33.396 [2024-07-14 01:20:22.315664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.315690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.315931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.315959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.316106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.316132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.316372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.316399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.316653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.316705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.316924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.316953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.317114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.317144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.317360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.317387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.317605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.317634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.317792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.317823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 
00:34:33.396 [2024-07-14 01:20:22.318014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.318044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.318267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.318294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.318610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.318640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.318836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.318873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.396 qpair failed and we were unable to recover it. 00:34:33.396 [2024-07-14 01:20:22.319094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.396 [2024-07-14 01:20:22.319124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.319316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.319343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.319626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.319679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.319876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.319905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.320084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.320111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.320261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.320288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 
00:34:33.397 [2024-07-14 01:20:22.320571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.320623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.320811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.320840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.321040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.321067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.321253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.321279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.321531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.321561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.321760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.321789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.322012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.322038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.322211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.322238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.322409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.322436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.322659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.322689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 
00:34:33.397 [2024-07-14 01:20:22.322883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.322913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.323085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.323111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.323268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.323294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.323504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.323547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.323777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.323803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.324001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.324028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.324281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.324335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.324554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.324583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.324792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.324822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.325011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.325037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 
00:34:33.397 [2024-07-14 01:20:22.325235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.325292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.325512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.325541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.325762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.325791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.325990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.326017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.326291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.326345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.326548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.326578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.326774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.326803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.327005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.327032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.327212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.327239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.327435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.327463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 
00:34:33.397 [2024-07-14 01:20:22.327657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.327685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.327901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.327943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.328114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.328157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.328356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.328386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.328577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.397 [2024-07-14 01:20:22.328606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.397 qpair failed and we were unable to recover it. 00:34:33.397 [2024-07-14 01:20:22.328851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.328882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.329083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.329112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.329330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.329360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.329526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.329554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.329725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.329751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 
00:34:33.398 [2024-07-14 01:20:22.329950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.329981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.330176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.330206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.330368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.330397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.330569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.330596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.330816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.330845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.331060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.331088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.331247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.331276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.331478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.331505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.331686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.331712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.331944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.331974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 
00:34:33.398 [2024-07-14 01:20:22.332146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.332175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.332375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.332400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.332655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.332706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.332903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.332933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.333152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.333178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.333351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.333377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.333698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.333756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.333983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.334012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.334215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.334249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.334428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.334454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 
00:34:33.398 [2024-07-14 01:20:22.334625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.334655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.334849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.334884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.335086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.335116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.335317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.335344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.335570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.335599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.335754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.335784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.335979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.336006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.336182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.336208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.336401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.336431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 00:34:33.398 [2024-07-14 01:20:22.336627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.398 [2024-07-14 01:20:22.336656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.398 qpair failed and we were unable to recover it. 
00:34:33.398 [2024-07-14 01:20:22.336823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.336853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.337064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.337091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.337304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.337331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.337538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.337567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.337758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.337787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.337973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.338000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.338225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.338254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.338449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.338479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.338701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.338731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.338937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.338964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 
00:34:33.399 [2024-07-14 01:20:22.339140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.339166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.339370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.339399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.339595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.339623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.339845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.339875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.340054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.340082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.340269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.340298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.340487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.340516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.340677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.340702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.340853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.340902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.341109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.341138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 
00:34:33.399 [2024-07-14 01:20:22.341352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.341381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.341599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.341625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.341839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.341873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.342043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.342074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.342299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.342328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.342507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.342532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.342705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.342732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.342927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.342956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.343162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.343196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.343392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.343418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 
00:34:33.399 [2024-07-14 01:20:22.343698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.343753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.343947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.343978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.344196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.344225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.344386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.344413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.344567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.344593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.344786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.344814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.344978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.345007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.345208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.345234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.345453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.345482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.399 [2024-07-14 01:20:22.345704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.345733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 
00:34:33.399 [2024-07-14 01:20:22.345933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.399 [2024-07-14 01:20:22.345963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.399 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.346161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.346187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.346450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.346502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.346719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.346748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.346973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.347004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.347197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.347223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.347465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.347518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.347709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.347738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.347932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.347962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.348164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.348191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 
00:34:33.400 [2024-07-14 01:20:22.348426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.348478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.348710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.348739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.348937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.348967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.349170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.349196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.349379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.349404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.349551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.349577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.349773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.349802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.350035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.350062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.350249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.350276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.350496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.350525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 
00:34:33.400 [2024-07-14 01:20:22.350755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.350784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.350986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.351013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.351229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.351298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.351520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.351549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.351743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.351771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.352004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.352031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.352216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.352243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.352434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.352463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.352695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.352725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.352904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.352931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 
00:34:33.400 [2024-07-14 01:20:22.353137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.353166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.353354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.353383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.353599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.353628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.353828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.353855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.354105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.354134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.354312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.354339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.354561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.354590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.354803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.354830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.355040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.355070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.355266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.355295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 
00:34:33.400 [2024-07-14 01:20:22.355487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.355516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.400 qpair failed and we were unable to recover it. 00:34:33.400 [2024-07-14 01:20:22.355744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.400 [2024-07-14 01:20:22.355770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 00:34:33.401 [2024-07-14 01:20:22.355987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.401 [2024-07-14 01:20:22.356014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 00:34:33.401 [2024-07-14 01:20:22.356212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.401 [2024-07-14 01:20:22.356238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 00:34:33.401 [2024-07-14 01:20:22.356478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.401 [2024-07-14 01:20:22.356507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 00:34:33.401 [2024-07-14 01:20:22.356736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.401 [2024-07-14 01:20:22.356762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 00:34:33.401 [2024-07-14 01:20:22.356992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.401 [2024-07-14 01:20:22.357022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 00:34:33.401 [2024-07-14 01:20:22.357188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.401 [2024-07-14 01:20:22.357216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 00:34:33.401 [2024-07-14 01:20:22.357410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.401 [2024-07-14 01:20:22.357439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 00:34:33.401 [2024-07-14 01:20:22.357609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.401 [2024-07-14 01:20:22.357634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.401 qpair failed and we were unable to recover it. 
00:34:33.401 [2024-07-14 01:20:22.357806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:33.401 [2024-07-14 01:20:22.357836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420
00:34:33.401 qpair failed and we were unable to recover it.
[... the identical three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously, with only the timestamps advancing, from 01:20:22.358045 through 01:20:22.407118 ...]
00:34:33.406 [2024-07-14 01:20:22.407317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:33.406 [2024-07-14 01:20:22.407346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420
00:34:33.406 qpair failed and we were unable to recover it.
00:34:33.406 [2024-07-14 01:20:22.407572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.407600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.407797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.407823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.408036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.408065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.408254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.408283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.408515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.408541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.408720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.408746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.408944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.408974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.409204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.409230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.409457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.409486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.409708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.409735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 
00:34:33.406 [2024-07-14 01:20:22.409915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.409942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.410154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.410183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.410350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.410378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.410556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.410582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.410775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.410805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.411001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.411027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.411250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.411278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.411446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.411473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.411793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.411863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 00:34:33.406 [2024-07-14 01:20:22.412101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.406 [2024-07-14 01:20:22.412130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.406 qpair failed and we were unable to recover it. 
00:34:33.406 [2024-07-14 01:20:22.412323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.412353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.412556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.412583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.412783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.412813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.413033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.413060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.413262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.413293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.413496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.413522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.413699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.413727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.413915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.413945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.414134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.414163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.414384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.414410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 
00:34:33.407 [2024-07-14 01:20:22.414730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.414798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.415011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.415041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.415240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.415269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.415447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.415474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.415733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.415786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.415964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.415995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.416185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.416214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.416414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.416440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.416617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.416645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.416874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.416903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 
00:34:33.407 [2024-07-14 01:20:22.417102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.417131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.417323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.417349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.417686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.417736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.417940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.417969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.418164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.418193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.418416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.418442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.418767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.418819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.419026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.419052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.419254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.419283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.419463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.419490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 
00:34:33.407 [2024-07-14 01:20:22.419759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.419810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.420035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.420062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.420238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.420267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.420466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.420493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.420758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.407 [2024-07-14 01:20:22.420808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.407 qpair failed and we were unable to recover it. 00:34:33.407 [2024-07-14 01:20:22.421034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.421061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.421235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.421261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.421433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.421460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.421787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.421831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.422067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.422093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 
00:34:33.408 [2024-07-14 01:20:22.422296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.422325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.422522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.422549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.422752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.422781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.422974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.423004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.423230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.423259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.423432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.423459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.423684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.423713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.423899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.423927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.424125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.424156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.424327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.424355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 
00:34:33.408 [2024-07-14 01:20:22.424548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.424577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.424797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.424823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.425007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.425035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.425209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.425235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.425493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.425547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.425752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.425786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.426007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.426037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.426218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.426244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.426514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.426566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.426795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.426821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 
00:34:33.408 [2024-07-14 01:20:22.427024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.427050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.427244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.427271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.427517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.427568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.427786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.427813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.428052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.428088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.428295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.428322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.428472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.428499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.428719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.428749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.428918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.428948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.429150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.429177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 
00:34:33.408 [2024-07-14 01:20:22.429375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.429404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.429624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.429654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.429827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.429856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.430058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.430086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.430379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.430447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.408 [2024-07-14 01:20:22.430648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.408 [2024-07-14 01:20:22.430675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.408 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.430876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.430906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.431079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.431106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.431346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.431397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.431593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.431622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 
00:34:33.409 [2024-07-14 01:20:22.431785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.431814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.432027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.432055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.432215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.432243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.432411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.432437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.432664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.432694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.432875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.432902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.433081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.433108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.433309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.433338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.433557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.433584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.433760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.433787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 
00:34:33.409 [2024-07-14 01:20:22.433967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.433998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.434204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.434231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.434399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.434428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.434630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.434656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.434881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.434911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.435080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.435117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.435341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.435368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.435547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.435574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.435741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.435771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.435992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.436019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 
00:34:33.409 [2024-07-14 01:20:22.436213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.436242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.436411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.436437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.436744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.436810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.437013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.437042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.437236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.437267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.437441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.437468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.437646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.437673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.437888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.437914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.438090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.438118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.438279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.438305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 
00:34:33.409 [2024-07-14 01:20:22.438482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.438509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.438708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.438735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.438884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.438910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.439074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.439101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.439251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.439278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.409 [2024-07-14 01:20:22.439455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.409 [2024-07-14 01:20:22.439481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.409 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.439655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.439680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.439864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.439895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.440044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.440071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.440251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.440278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 
00:34:33.410 [2024-07-14 01:20:22.440455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.440481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.440631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.440657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.440836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.440863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.441046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.441074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.441243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.441270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.441443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.441470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.441647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.441673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.441850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.441886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.442064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.442092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.442239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.442266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 
00:34:33.410 [2024-07-14 01:20:22.442467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.442493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.442665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.442693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.442860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.442899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.443077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.443104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.443310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.443337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.443516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.443546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.443716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.443743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.443922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.443950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.444150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.444176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.444387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.444416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 
00:34:33.410 [2024-07-14 01:20:22.444634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.444662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.444860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.444890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.445066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.445093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.445243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.445270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.445472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.445498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.445647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.445673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.445883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.445909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.446086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.446113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.446317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.446344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.446519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.446545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 
00:34:33.410 [2024-07-14 01:20:22.446723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.446749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.446904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.446930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.447113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.447139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.447308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.447334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.447537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.447564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.447797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.447827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.410 [2024-07-14 01:20:22.448028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.410 [2024-07-14 01:20:22.448058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.410 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.448252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.448279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.448621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.448678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.448875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.448905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 
00:34:33.411 [2024-07-14 01:20:22.449135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.449162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.449317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.449344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.449549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.449575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.449753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.449779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.449932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.449959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.450106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.450133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.450309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.450335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.450539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.450565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.450760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.450789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.450994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.451022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 
00:34:33.411 [2024-07-14 01:20:22.451170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.451197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.451375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.451402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.451627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.451655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.451878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.451905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.452137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.452166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.452361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.452391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.452596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.452622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.452809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.452835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.453020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.453047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.453277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.453306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 
00:34:33.411 [2024-07-14 01:20:22.453526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.453555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.453717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.453743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.453895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.453921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.454131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.454160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.454354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.454384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.454584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.454611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.454789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.454815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.454991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.455017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.455208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.455237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.455464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.455491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 
00:34:33.411 [2024-07-14 01:20:22.455668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.411 [2024-07-14 01:20:22.455711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.411 qpair failed and we were unable to recover it. 00:34:33.411 [2024-07-14 01:20:22.455904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.455934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.456137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.456167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.456382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.456408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.456610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.456637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.456786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.456812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.456989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.457017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.457171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.457197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.457350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.457377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.457551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.457577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 
00:34:33.412 [2024-07-14 01:20:22.457750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.457776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.457960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.457987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.458162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.458189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.458362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.458388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.458558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.458584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.458786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.458816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.459036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.459063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.459270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.459299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.459493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.459523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.459742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.459769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 
00:34:33.412 [2024-07-14 01:20:22.459970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.459997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.460176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.460202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.460403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.460448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.460623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.460649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.460823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.460850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.461014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.461045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.461222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.461249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.461426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.461452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.461650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.461677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.461854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.461893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 
00:34:33.412 [2024-07-14 01:20:22.462131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.462161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.462342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.462370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.462554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.462580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.462781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.462808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.463014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.463040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.463214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.463240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.463381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.463408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.412 [2024-07-14 01:20:22.463584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.412 [2024-07-14 01:20:22.463610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.412 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.463786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.463811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.464001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.464027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 
00:34:33.413 [2024-07-14 01:20:22.464206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.464233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.464425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.464454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.464622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.464651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.464839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.464872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.465075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.465104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.465293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.465321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.465513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.465543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.465767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.465794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.465997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.466024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.466223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.466252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 
00:34:33.413 [2024-07-14 01:20:22.466446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.466476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.466708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.466734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.466955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.466985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.467192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.467218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.467425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.467451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.467626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.467652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.467805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.467831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.468017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.468045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.468272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.468302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.468509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.468536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 
00:34:33.413 [2024-07-14 01:20:22.468740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.468767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.468957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.468984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.469134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.469161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.469342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.469368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.469544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.469570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.469767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.469798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.470006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.470032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.470250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.470277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.470486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.470512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.470692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.470732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 
00:34:33.413 [2024-07-14 01:20:22.470914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.470944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.471145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.471172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.471477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.471533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.471762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.471791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.471978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.472008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.472222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.472248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.472397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.472424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.472597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.472624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.413 [2024-07-14 01:20:22.472797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.413 [2024-07-14 01:20:22.472824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.413 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.473022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.473048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 
00:34:33.414 [2024-07-14 01:20:22.473247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.473275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.473495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.473523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.473741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.473770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.473977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.474003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.474200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.474230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.474449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.474479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.474715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.474744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.474918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.474945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.475123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.475151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.475385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.475411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 
00:34:33.414 [2024-07-14 01:20:22.475594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.475620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.475794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.475819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.475980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.476007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.476180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.476207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.476352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.476378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.476518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.476544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.476724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.476751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.476934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.476961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.477154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.477182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 00:34:33.414 [2024-07-14 01:20:22.477358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.414 [2024-07-14 01:20:22.477384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.414 qpair failed and we were unable to recover it. 
00:34:33.419 [2024-07-14 01:20:22.523941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.523971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.524137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.524164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.524337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.524393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.524580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.524609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.524824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.524854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.525061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.525087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.525290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.525318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.525479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.525507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.525720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.525768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.525984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.526011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 
00:34:33.419 [2024-07-14 01:20:22.526171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.526198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.526360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.526386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.526589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.526642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.526836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.526870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.527068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.527094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.527316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.527362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.527661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.527718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.527983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.528009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.528162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.528188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.528347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.528375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 
00:34:33.419 [2024-07-14 01:20:22.528587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.528620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.528807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.528835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.529011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.529037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.529185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.529211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.529453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.529479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.419 [2024-07-14 01:20:22.529654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.419 [2024-07-14 01:20:22.529680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.419 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.529892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.529918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.530092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.530118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.530301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.530327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.530559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.530589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 
00:34:33.420 [2024-07-14 01:20:22.530763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.530790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.530939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.530966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.531116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.531142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.531316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.531342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.531548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.531574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.531774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.531803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.531984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.532011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.532182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.532212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.532384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.532412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.532596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.532622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 
00:34:33.420 [2024-07-14 01:20:22.532797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.532824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.533021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.533049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.533190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.533219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.533410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.533436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.533583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.533610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.533766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.533791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.533963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.533989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.534138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.534164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.534332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.534360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.534565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.534591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 
00:34:33.420 [2024-07-14 01:20:22.534795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.534824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.535003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.535030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.535182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.535209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.535365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.535392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.535553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.535579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.535769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.535798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.536018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.536045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.536202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.536229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.536431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.536457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 00:34:33.420 [2024-07-14 01:20:22.536606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.420 [2024-07-14 01:20:22.536632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.420 qpair failed and we were unable to recover it. 
00:34:33.421 [2024-07-14 01:20:22.538472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:33.421 [2024-07-14 01:20:22.538517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:33.421 qpair failed and we were unable to recover it.
00:34:33.422 [2024-07-14 01:20:22.550942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:33.422 [2024-07-14 01:20:22.550983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:33.422 qpair failed and we were unable to recover it.
00:34:33.422 [2024-07-14 01:20:22.554790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:33.422 [2024-07-14 01:20:22.554830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:33.422 qpair failed and we were unable to recover it.
00:34:33.423 [2024-07-14 01:20:22.563437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.423 [2024-07-14 01:20:22.563466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.423 qpair failed and we were unable to recover it. 00:34:33.423 [2024-07-14 01:20:22.563694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.423 [2024-07-14 01:20:22.563742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.423 qpair failed and we were unable to recover it. 00:34:33.423 [2024-07-14 01:20:22.563969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.423 [2024-07-14 01:20:22.563996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.423 qpair failed and we were unable to recover it. 00:34:33.423 [2024-07-14 01:20:22.564136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.423 [2024-07-14 01:20:22.564178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.423 qpair failed and we were unable to recover it. 00:34:33.423 [2024-07-14 01:20:22.564377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.564403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.564651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.564703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.564909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.564936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.565114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.565140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.565325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.565353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.565547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.565576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 
00:34:33.424 [2024-07-14 01:20:22.565765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.565794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.565971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.565997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.566192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.566221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.566426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.566455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.566643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.566672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.566846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.566881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.567072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.567097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.567268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.567294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.567524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.567552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.567754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.567783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 
00:34:33.424 [2024-07-14 01:20:22.567989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.568015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.568163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.568189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.568409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.568438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.568623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.568652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.568810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.568842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.569062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.569103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.569313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.569357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.569532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.569578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.569757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.569784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.569963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.569990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 
00:34:33.424 [2024-07-14 01:20:22.570171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.570215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.570449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.570493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.570646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.570673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.570879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.570906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.571083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.571128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.571337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.571382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.571568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.571613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.571813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.571840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.572025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.572069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.424 qpair failed and we were unable to recover it. 00:34:33.424 [2024-07-14 01:20:22.572305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.424 [2024-07-14 01:20:22.572349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 
00:34:33.425 [2024-07-14 01:20:22.572534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.572582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.572787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.572815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.573006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.573052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.573223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.573269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.573476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.573506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.573693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.573720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.574019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.574065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.574280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.574308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.574546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.574592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.574771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.574799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 
00:34:33.425 [2024-07-14 01:20:22.575010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.575055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.575263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.575307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.575510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.575560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.575712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.575738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.575936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.575982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.576184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.576228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.576455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.576501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.576674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.576701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.576884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.576912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.577079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.577124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 
00:34:33.425 [2024-07-14 01:20:22.577327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.577371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.577597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.577645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.577829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.577856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.578055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.578082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.578282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.578329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.578501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.578544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.425 qpair failed and we were unable to recover it. 00:34:33.425 [2024-07-14 01:20:22.578704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.425 [2024-07-14 01:20:22.578731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.578928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.578973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.579158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.579204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.579376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.579421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 
00:34:33.426 [2024-07-14 01:20:22.579664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.579712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.579871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.579898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.580070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.580116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.580353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.580397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.580646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.580690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.580873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.580900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.581054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.581081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.581249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.581293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.581446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.581474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.581676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.581722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 
00:34:33.426 [2024-07-14 01:20:22.581873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.581900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.582072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.582118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.582346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.582391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.582603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.582648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.582827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.582854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.583052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.583098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.583305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.583350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.583556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.583600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.583780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.583806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.584004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.584050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 
00:34:33.426 [2024-07-14 01:20:22.584223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.584268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.584476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.584519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.584697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.584723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.584925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.584956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.585153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.585197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.585394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.426 [2024-07-14 01:20:22.585438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.426 qpair failed and we were unable to recover it. 00:34:33.426 [2024-07-14 01:20:22.585621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.585649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.585802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.585829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.586038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.586084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.586285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.586330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 
00:34:33.427 [2024-07-14 01:20:22.586560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.586604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.586807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.586834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.587023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.587069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.587272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.587302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.587485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.587533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.587678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.587705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.587849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.587881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.588067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.588117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.588329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.588374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.588579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.588626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 
00:34:33.427 [2024-07-14 01:20:22.588799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.588826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.589011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.589056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.589221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.589265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.589480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.589524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.589759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.589806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.589965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.589992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.590167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.590213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.590383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.590426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.590629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.590674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.590850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.590883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 
00:34:33.427 [2024-07-14 01:20:22.591069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.591114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.591319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.591363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.591558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.591587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.591784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.591811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.592001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.592047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.427 [2024-07-14 01:20:22.592248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.427 [2024-07-14 01:20:22.592292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.427 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.592487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.592517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.592707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.592734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.592933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.592979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.593161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.593207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 
00:34:33.428 [2024-07-14 01:20:22.593440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.593484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.593662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.593693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.593903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.593948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.594105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.594135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.594317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.594345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.594536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.594565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.594770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.594799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.594990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.595017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.595187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.595216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.595416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.595458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 
00:34:33.428 [2024-07-14 01:20:22.595659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.595689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.595858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.595918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.596070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.596097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.596266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.596295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.596490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.596519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.596740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.596787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.596986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.597013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.597214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.597243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.597405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.597434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.597688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.597734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 
00:34:33.428 [2024-07-14 01:20:22.597943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.597970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.598124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.598151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.598326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.598355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.598598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.598646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.598877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.598903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.428 qpair failed and we were unable to recover it. 00:34:33.428 [2024-07-14 01:20:22.599062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.428 [2024-07-14 01:20:22.599089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.599290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.599319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.599520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.599571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.599795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.599825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.600011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.600038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 
00:34:33.429 [2024-07-14 01:20:22.600214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.600243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.600466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.600494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.600656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.600685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.600858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.600890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.601038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.601064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.601288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.601334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.601559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.601606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.601765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.601794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.601960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.601986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.602147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.602173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 
00:34:33.429 [2024-07-14 01:20:22.602406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.602452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.602621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.602649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.602831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.602857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.603026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.603052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.603221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.603249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.603449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.603492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.603660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.603688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.603854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.603885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.604054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.604080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.604282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.604310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 
00:34:33.429 [2024-07-14 01:20:22.604521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.604549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.604771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.604800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.604987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.605014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.605160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.605186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.605386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.605415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.605636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.605687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.429 [2024-07-14 01:20:22.605891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.429 [2024-07-14 01:20:22.605918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.429 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.606069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.606095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.606318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.606347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.606568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.606615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 
00:34:33.430 [2024-07-14 01:20:22.606810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.606839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.607017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.607043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.607222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.607251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.607493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.607535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.607767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.607796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.607975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.608001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.608149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.608192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.608404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.608449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.608707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.608739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.608941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.608967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 
00:34:33.430 [2024-07-14 01:20:22.609123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.609165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.609381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.609428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.609630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.609656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.609835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.609861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.610032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.610060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.610218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.610247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.610442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.610468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.610663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.610692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.610861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.610896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.611056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.611084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 
00:34:33.430 [2024-07-14 01:20:22.611287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.611313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.430 [2024-07-14 01:20:22.611484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.430 [2024-07-14 01:20:22.611513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.430 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.611732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.611764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.611933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.611962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.612141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.612167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.612312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.612354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.612555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.612584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.612789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.612814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.612975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.613001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.613178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.613207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 
00:34:33.431 [2024-07-14 01:20:22.613382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.613409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.613564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.613608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.613799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.613825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.613984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.614011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.614160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.614204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.614422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.614470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.614645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.614671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.614820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.614845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.615017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.615059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.615274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.615304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 
00:34:33.431 [2024-07-14 01:20:22.615511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.615539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.615729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.615776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.616009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.616040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.616204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.616234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.616429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.616456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.616640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.616668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.616840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.616877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.617053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.617079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.617227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.617253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.617432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.617467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 
00:34:33.431 [2024-07-14 01:20:22.617702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.617752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.617962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.431 [2024-07-14 01:20:22.617989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.431 qpair failed and we were unable to recover it. 00:34:33.431 [2024-07-14 01:20:22.618133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.618160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.618336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.618361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.618579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.618609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.618776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.618804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.618998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.619025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.619174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.619201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.619354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.619382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.619575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.619604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 
00:34:33.432 [2024-07-14 01:20:22.619800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.619828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.620012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.620038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.620214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.620243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.620444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.620470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.620626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.620653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.620828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.620854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.621014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.621040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.621186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.621213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.621399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.621425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.621624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.621650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 
00:34:33.432 [2024-07-14 01:20:22.621849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.621884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.622061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.622087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.622258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.622284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.622462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.622488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.622755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.622803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.622987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.623015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.623176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.623202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.623376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.623403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.623542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.623569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.623767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.623796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 
00:34:33.432 [2024-07-14 01:20:22.623964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.623992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.624146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.432 [2024-07-14 01:20:22.624172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.432 qpair failed and we were unable to recover it. 00:34:33.432 [2024-07-14 01:20:22.624382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.624412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.624587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.624616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.624814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.624840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.624995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.625023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.625193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.625221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.625415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.625444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.625670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.625696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.625870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.625901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 
00:34:33.433 [2024-07-14 01:20:22.626062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.626088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.626286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.626315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.626544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.626570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.626719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.626745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.626918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.626962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.627141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.627167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.627352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.627379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.627520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.627546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.627730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.627758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.627969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.627997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 
00:34:33.433 [2024-07-14 01:20:22.628195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.628222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.628371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.628397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.628598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.628627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.628830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.628856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.629023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.629049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.629199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.629226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.629401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.629445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.629609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.629638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.629833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.629862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.630033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.630059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 
00:34:33.433 [2024-07-14 01:20:22.630282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.433 [2024-07-14 01:20:22.630311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.433 qpair failed and we were unable to recover it. 00:34:33.433 [2024-07-14 01:20:22.630539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.630569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.630792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.630818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.630972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.630999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.631200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.631229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.631401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.631429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.631639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.631665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.631885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.631911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.632113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.632141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.632311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.632342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 
00:34:33.434 [2024-07-14 01:20:22.632532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.632558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.632729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.632757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.632969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.632999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.633226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.633253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.633429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.633455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.633633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.633660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.633886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.633915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.634085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.634114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.634320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.634347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.634548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.634578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 
00:34:33.434 [2024-07-14 01:20:22.634806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.634835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.635072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.635119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.635328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.635355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.635563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.635591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.635795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.635825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.636059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.636086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.636255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.636282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.636456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.636483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.636710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.636739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.636936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.636963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 
00:34:33.434 [2024-07-14 01:20:22.637149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.434 [2024-07-14 01:20:22.637176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.434 qpair failed and we were unable to recover it. 00:34:33.434 [2024-07-14 01:20:22.637352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.637378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.637577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.637606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.637809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.637836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.638029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.638057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.638237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.638264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.638465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.638495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.638700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.638727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.638919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.638950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.639144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.639174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 
00:34:33.435 [2024-07-14 01:20:22.639328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.639358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.639572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.639602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.639789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.639819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.640563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.640597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.640790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.640818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.641014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.641042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.641205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.641233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.641432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.641459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.641616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.641642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.641838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.641879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 
00:34:33.435 [2024-07-14 01:20:22.642106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.642133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.642322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.642349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.642525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.642552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.642755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.642784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.642966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.642993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.643145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.643178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.643454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.643497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.643725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.643752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.643942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.643972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 00:34:33.435 [2024-07-14 01:20:22.644168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.644199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.435 qpair failed and we were unable to recover it. 
00:34:33.435 [2024-07-14 01:20:22.644399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.435 [2024-07-14 01:20:22.644426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.644628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.644658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.644848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.644883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.645078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.645105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.645310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.645337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.645541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.645570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.645735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.645765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.645991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.646019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.646192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.646218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.646413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.646442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 
00:34:33.436 [2024-07-14 01:20:22.646624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.646671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.646872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.646900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.647046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.647073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.647302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.647332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.647550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.647595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.647787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.647814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.647980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.648006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.648164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.648190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.648338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.648365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.648533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.648559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 
00:34:33.436 [2024-07-14 01:20:22.648696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d15b0 is same with the state(5) to be set 00:34:33.436 [2024-07-14 01:20:22.648925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.648965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.649196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.649226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.649406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.649432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.649603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.649630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.649856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.649910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.650066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.650094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.650307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.650336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.650571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.436 [2024-07-14 01:20:22.650618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.436 qpair failed and we were unable to recover it. 00:34:33.436 [2024-07-14 01:20:22.650829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.650877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 
00:34:33.437 [2024-07-14 01:20:22.651052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.651080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.651277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.651307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.651514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.651559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.651770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.651816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.652031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.652057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.652229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.652256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.652419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.652447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.652629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.652656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.652880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.652922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.653070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.653096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 
00:34:33.437 [2024-07-14 01:20:22.653321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.653354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.653555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.653582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.653774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.653801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.653982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.654009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.654155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.654181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.654347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.654374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.654558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.654586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.654796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.654823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.655017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.655043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.655232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.655259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 
00:34:33.437 [2024-07-14 01:20:22.655424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.655451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.655604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.655646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.655831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.655875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.656036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.656062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.656254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.656282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.656508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.656550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.656731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.656759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.656963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.656990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.657132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.657158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.437 qpair failed and we were unable to recover it. 00:34:33.437 [2024-07-14 01:20:22.657362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.437 [2024-07-14 01:20:22.657388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 
00:34:33.438 [2024-07-14 01:20:22.657576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.657603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.657798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.657824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.657971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.657998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.658189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.658215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.658396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.658424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.658642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.658668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.658889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.658932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.659108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.659137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.659349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.659374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.659572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.659596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 
00:34:33.438 [2024-07-14 01:20:22.659775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.659798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.659998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.660023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.660191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.660215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.660368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.660392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.660569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.660593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.660774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.660799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.660947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.660972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.661124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.661151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.661326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.661352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.661505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.661531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 
00:34:33.438 [2024-07-14 01:20:22.661676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.661711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.661862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.661895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.662067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.662093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.662270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.662297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.662499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.662525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.662679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.662705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.662887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.662913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.663062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.663088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.663232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.663258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 00:34:33.438 [2024-07-14 01:20:22.663435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.663461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.438 qpair failed and we were unable to recover it. 
00:34:33.438 [2024-07-14 01:20:22.663607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.438 [2024-07-14 01:20:22.663633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.663845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.663875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.664085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.664110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.664293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.664319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.664504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.664534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.664698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.664724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.664899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.664926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.665071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.665098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.665279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.665305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.665506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.665532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 
00:34:33.439 [2024-07-14 01:20:22.665693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.665719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.665910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.665936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.666114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.666140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.666350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.666384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.666533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.666559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.666729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.666755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.666944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.666980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.667126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.667163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.667369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.667395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.667595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.667621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 
00:34:33.439 [2024-07-14 01:20:22.667776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.667803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.667987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.668013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.668188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.668213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.668409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.668435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.668584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.668610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.668753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.668779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.668978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.669005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.669183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.669209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.669349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.669375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.669517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.669544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 
00:34:33.439 [2024-07-14 01:20:22.669720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.439 [2024-07-14 01:20:22.669746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.439 qpair failed and we were unable to recover it. 00:34:33.439 [2024-07-14 01:20:22.669890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.669917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.670070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.670096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.670245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.670271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.670451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.670476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.670649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.670675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.670846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.670884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.671028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.671054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.671198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.671224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.671419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.671445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 
00:34:33.440 [2024-07-14 01:20:22.671619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.671645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.671824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.671850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.672062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.672088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.672267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.672293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.672460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.672486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.672655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.672685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.672888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.672915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.673085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.673111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.673285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.673311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.673489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.673516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 
00:34:33.440 [2024-07-14 01:20:22.673686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.673712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.673913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.673939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.674112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.674138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.674319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.674345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.674483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.674509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.674685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.674711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.674899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.674925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.675098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.675124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.675265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.675290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.675473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.675499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 
00:34:33.440 [2024-07-14 01:20:22.675642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.675668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.675847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.675878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.440 [2024-07-14 01:20:22.676065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.440 [2024-07-14 01:20:22.676090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.440 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.676265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.676290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.676470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.676496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.676668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.676694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.676880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.676907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.677083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.677118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.677319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.677357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.677541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.677567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 
00:34:33.441 [2024-07-14 01:20:22.677713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.677739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.677879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.677906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.678117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.678147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.678295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.678322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.678502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.678528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.678722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.678748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.678925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.678951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.679130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.679156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.679328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.679354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.679511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.679538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 
00:34:33.441 [2024-07-14 01:20:22.679741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.679770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.679972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.679998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.680139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.680166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.680368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.680393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.680561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.680587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.680788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.680817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.441 qpair failed and we were unable to recover it. 00:34:33.441 [2024-07-14 01:20:22.681031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.441 [2024-07-14 01:20:22.681058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.681235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.681261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.681486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.681514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.681730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.681756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 
00:34:33.442 [2024-07-14 01:20:22.681903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.681929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.682109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.682151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.682318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.682345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.682583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.682622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.682847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.682878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.683093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.683119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.683328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.683355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.683506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.683532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.683706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.683732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.684013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.684045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 
00:34:33.442 [2024-07-14 01:20:22.684224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.684250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.684393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.684419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.684570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.684596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.684746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.684772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.684957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.684984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.685139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.685165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.685334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.685361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.685535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.685561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.685746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.685773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.685953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.685980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 
00:34:33.442 [2024-07-14 01:20:22.686153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.686179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.686379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.686405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.686549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.686575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.686751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.686788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.686968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.686995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.687170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.687196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.687400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.687426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.442 qpair failed and we were unable to recover it. 00:34:33.442 [2024-07-14 01:20:22.687627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.442 [2024-07-14 01:20:22.687654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.687831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.687874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.688042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.688068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 
00:34:33.443 [2024-07-14 01:20:22.688242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.688269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.688474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.688500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.688694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.688720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.688904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.688931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.689109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.689135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.689306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.689332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.689500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.689526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.689745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.689771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.689943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.689970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.690174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.690204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 
00:34:33.443 [2024-07-14 01:20:22.690418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.690444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.690622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.690649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.690820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.690846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.691064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.691090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.691265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.691291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.691521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.691547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.691689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.691716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.691898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.691925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.692124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.692150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.692296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.692323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 
00:34:33.443 [2024-07-14 01:20:22.692490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.692517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.692698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.692724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.692921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.692974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.693219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.693245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.693418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.693444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.693653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.693679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.693884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.693910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.694090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.694116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.443 [2024-07-14 01:20:22.694294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.443 [2024-07-14 01:20:22.694320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.443 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.694521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.694547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 
00:34:33.444 [2024-07-14 01:20:22.694751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.694778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.694951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.694977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.695153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.695179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.695348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.695375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.695546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.695572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.695782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.695808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.696051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.696080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.696274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.696300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.696499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.696525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.696694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.696739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 
00:34:33.444 [2024-07-14 01:20:22.696946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.696973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.697153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.697179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.697381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.697410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.697626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.697655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.697883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.697910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.698091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.698117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.698267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.698293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.698435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.698465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.698656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.698682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.698859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.698890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 
00:34:33.444 [2024-07-14 01:20:22.699059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.699095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.699244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.699271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.699469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.699495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.699670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.699696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.699846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.699877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.700032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.700058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.700229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.700255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.700430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.700456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.700627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.700653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.444 qpair failed and we were unable to recover it. 00:34:33.444 [2024-07-14 01:20:22.700799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.444 [2024-07-14 01:20:22.700826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 
00:34:33.445 [2024-07-14 01:20:22.701073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.701100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.701268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.701294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.701467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.701494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.701673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.701699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.701880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.701909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.702123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.702160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.702311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.702337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.702532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.702561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.702757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.702785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.703018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.703044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 
00:34:33.445 [2024-07-14 01:20:22.703251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.703279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.703489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.703515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.703733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.703761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.703945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.703973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.704166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.704196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.704354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.704380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.704535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.704562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.704728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.704754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.704913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.704939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.705092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.705120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 
00:34:33.445 [2024-07-14 01:20:22.705298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.705324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.705487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.705514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.705694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.705721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.705881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.705908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.706099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.706127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.706287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.706314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.706482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.706509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.706729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.706759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.706991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.707018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.707172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.707199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 
00:34:33.445 [2024-07-14 01:20:22.707397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.707423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.445 [2024-07-14 01:20:22.707631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.445 [2024-07-14 01:20:22.707660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.445 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.707891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.707927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.708168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.708195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.708416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.708442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.708620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.708646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.708850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.708900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.709067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.709094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.709285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.709310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.709486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.709512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 
00:34:33.446 [2024-07-14 01:20:22.709662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.709688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.709931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.709978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.710196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.710223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.710383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.710410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.710578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.710604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.710795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.710824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.711010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.711038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.711262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.711287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.711482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.711511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.711797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.711845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 
00:34:33.446 [2024-07-14 01:20:22.712067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.712093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.712301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.712330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.712507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.712532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.712681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.712707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.712907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.712951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.713118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.713146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.713364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.713390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.713616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.713644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.713860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.713896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.714112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.714138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 
00:34:33.446 [2024-07-14 01:20:22.714333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.714361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.714553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.714581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.714783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.714809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.714986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.446 [2024-07-14 01:20:22.715015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.446 qpair failed and we were unable to recover it. 00:34:33.446 [2024-07-14 01:20:22.715182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.715210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.715384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.715409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.715607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.715633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.715881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.715909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.716081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.716108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.716265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.716291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 
00:34:33.447 [2024-07-14 01:20:22.716467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.716493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.716635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.716661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.716827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.716882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.717123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.717149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.717325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.717351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.717529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.717555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.717781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.717810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.718008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.718035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.718230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.718257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.718483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.718512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 
00:34:33.447 [2024-07-14 01:20:22.718689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.718715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.718893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.718920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.719092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.719123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.719327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.719353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.719495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.719521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.719695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.719721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.447 [2024-07-14 01:20:22.719870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.447 [2024-07-14 01:20:22.719896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.447 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.720044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.720070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.720225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.720251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.720385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.720411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 
00:34:33.448 [2024-07-14 01:20:22.720632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.720660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.720837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.720884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.721092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.721118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.721327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.721368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.721575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.721603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.721821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.721847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.722086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.722114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.722277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.722305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.722503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.722529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.722723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.722751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 
00:34:33.448 [2024-07-14 01:20:22.722946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.722975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.723142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.723172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.723365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.723394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.723615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.723641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.723810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.723836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.724026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.724055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.724244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.724273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.724467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.724494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.724675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.724701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.724890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.724924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 
00:34:33.448 [2024-07-14 01:20:22.725126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.725164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.725358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.725386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.725575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.725604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.725823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.725852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.726019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.726045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.726264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.726293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.726503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.726529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.726682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.448 [2024-07-14 01:20:22.726708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.448 qpair failed and we were unable to recover it. 00:34:33.448 [2024-07-14 01:20:22.726925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.726953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.727148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.727174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 
00:34:33.449 [2024-07-14 01:20:22.727370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.727398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.727580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.727608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.727826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.727863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.728067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.728094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.728287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.728315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.728517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.728543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.728740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.728768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.728964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.728992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.729162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.729188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.729336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.729362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 
00:34:33.449 [2024-07-14 01:20:22.729559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.729585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.729768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.729794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.730000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.730026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.730261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.730288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.730480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.730506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.730728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.730755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.730956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.730988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.731180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.731206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.731422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.731449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.731623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.731650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 
00:34:33.449 [2024-07-14 01:20:22.731840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.731876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.732097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.732124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.732325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.732351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.732520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.732546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.732745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.732772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.732926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.732953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.733154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.733180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.733387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.733414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.449 qpair failed and we were unable to recover it. 00:34:33.449 [2024-07-14 01:20:22.733616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.449 [2024-07-14 01:20:22.733643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.733827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.733852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 
00:34:33.450 [2024-07-14 01:20:22.734050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.734076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.734243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.734269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.734446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.734472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.734672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.734698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.734886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.734912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.735092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.735118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.735269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.735295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.735470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.735496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.735701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.735726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.735911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.735937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 
00:34:33.450 [2024-07-14 01:20:22.736120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.736147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.736346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.736372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.736521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.736547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.736716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.736742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.736894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.736921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.737090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.737116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.737296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.737322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.737486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.737513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.737689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.737716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.737884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.737910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 
00:34:33.450 [2024-07-14 01:20:22.738081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.738107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.738283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.738310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.738488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.738514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.738691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.738717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.738884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.738910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.739060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.739086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.739224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.739250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.739455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.739481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.739654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.450 [2024-07-14 01:20:22.739680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.450 qpair failed and we were unable to recover it. 00:34:33.450 [2024-07-14 01:20:22.739852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.739883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 
00:34:33.451 [2024-07-14 01:20:22.740062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.740088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.740288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.740314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.740462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.740488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.740667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.740693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.740863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.740894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.741051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.741077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.741289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.741315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.741467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.741493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.741701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.741727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.741898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.741924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 
00:34:33.451 [2024-07-14 01:20:22.742119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.742145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.742349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.742375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.742528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.742554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.742729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.742755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.742924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.742951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.743097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.743123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.743292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.743318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.743517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.743543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.743695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.743721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.743921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.743948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 
00:34:33.451 [2024-07-14 01:20:22.744116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.744142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.744318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.744344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.744519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.744545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.744733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.744759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.744943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.744973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.745141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.745178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.745347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.745373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.745538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.745564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.745741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.745767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.451 [2024-07-14 01:20:22.745966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.745992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 
00:34:33.451 [2024-07-14 01:20:22.746169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.451 [2024-07-14 01:20:22.746195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.451 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.746364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.746390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.746564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.746591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.746761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.746787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.746959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.746986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.747132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.747158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.747325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.747351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.747493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.747519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.747722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.747748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.747899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.747927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 
00:34:33.452 [2024-07-14 01:20:22.748098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.748124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.748289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.748315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.748516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.748542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.748707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.748733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.748907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.748934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.749107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.749133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.749305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.749331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.749506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.749532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.749710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.749736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.749911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.749937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 
00:34:33.452 [2024-07-14 01:20:22.750118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.750148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.750296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.750327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.750474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.750500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.750696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.750722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.750967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.750994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.751244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.751270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.751477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.751503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.751649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.751675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.751876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.751903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.452 [2024-07-14 01:20:22.752049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.752075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 
00:34:33.452 [2024-07-14 01:20:22.752276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.452 [2024-07-14 01:20:22.752302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.452 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.752489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.752518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.752715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.752741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.752918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.752944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.753123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.753166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.753348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.753375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.753549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.753574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.753773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.753801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.754003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.754030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.754218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.754244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 
00:34:33.453 [2024-07-14 01:20:22.754442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.754471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.754666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.754692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.754838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.754871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.755051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.755078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.755251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.755277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.755475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.755504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.755755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.755784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.755979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.756007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.756200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.756227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.756405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.756446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 
00:34:33.453 [2024-07-14 01:20:22.756616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.756643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.756833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.756873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.757061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.757087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.757253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.757280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.453 qpair failed and we were unable to recover it. 00:34:33.453 [2024-07-14 01:20:22.757457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.453 [2024-07-14 01:20:22.757483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.757654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.757680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.757888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.757915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.758061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.758087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.758286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.758312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.758510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.758536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 
00:34:33.454 [2024-07-14 01:20:22.758762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.758791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.759018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.759044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.759206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.759232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.759384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.759410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.759589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.759615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.759821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.759848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.760061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.760087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.760247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.760276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.760456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.760482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.760627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.760669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 
00:34:33.454 [2024-07-14 01:20:22.760819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.760848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.761051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.761077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.761266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.761292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.761464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.761491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.761669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.761694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.761890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.761917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.762066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.762092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.762287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.762314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.762468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.762494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.762640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.762668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 
00:34:33.454 [2024-07-14 01:20:22.762849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.762879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.763079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.763105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.763241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.763267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.763443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.763469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.454 qpair failed and we were unable to recover it. 00:34:33.454 [2024-07-14 01:20:22.763618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.454 [2024-07-14 01:20:22.763646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.763822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.763848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.764030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.764056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.764260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.764289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.764478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.764507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.764708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.764738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 
00:34:33.455 [2024-07-14 01:20:22.764948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.764991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.765200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.765230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.765442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.765468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.765655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.765684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.765883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.765912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.766109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.766136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.766358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.766386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.766559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.766585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.766785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.766811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.767011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.767040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 
00:34:33.455 [2024-07-14 01:20:22.767229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.767263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.767469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.767496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.767649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.767675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.767905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.767950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.768137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.768171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.768370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.768399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.768602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.768631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.768796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.768822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.769002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.769029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.455 [2024-07-14 01:20:22.769205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.769233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 
00:34:33.455 [2024-07-14 01:20:22.769383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.455 [2024-07-14 01:20:22.769420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.455 qpair failed and we were unable to recover it. 00:34:33.739 [2024-07-14 01:20:22.769598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.739 [2024-07-14 01:20:22.769625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.739 qpair failed and we were unable to recover it. 00:34:33.739 [2024-07-14 01:20:22.769862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.739 [2024-07-14 01:20:22.769910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.739 qpair failed and we were unable to recover it. 00:34:33.739 [2024-07-14 01:20:22.770124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.739 [2024-07-14 01:20:22.770163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.739 qpair failed and we were unable to recover it. 00:34:33.739 [2024-07-14 01:20:22.770355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.739 [2024-07-14 01:20:22.770382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.770597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.770652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.770826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.770856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.771059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.771087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.771303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.771333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.771529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.771556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 
00:34:33.740 [2024-07-14 01:20:22.771775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.771803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.771974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.772004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.772234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.772261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.772412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.772438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.772641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.772671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.772845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.772888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.773058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.773098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.773262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.773290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.773465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.773494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.773734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.773763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 
00:34:33.740 [2024-07-14 01:20:22.773979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.774006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.774197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.774233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.774455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.774484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.774640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.774668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.774863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.774902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.775061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.775088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.775289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.775317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.775498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.775527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.775717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.775744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.775939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.775968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 
00:34:33.740 [2024-07-14 01:20:22.776143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.776169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.776359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.776386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.776580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.776610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.776788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.776819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.777000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.740 [2024-07-14 01:20:22.777027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.740 qpair failed and we were unable to recover it. 00:34:33.740 [2024-07-14 01:20:22.777240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.777270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.777471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.777508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.777712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.777755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.777980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.778015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.778229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.778255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 
00:34:33.741 [2024-07-14 01:20:22.778481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.778512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.778738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.778768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.778983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.779018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.779165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.779191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.779386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.779418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.779618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.779644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.779844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.779893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.780078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.780107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.780330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.780356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.780517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.780543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 
00:34:33.741 [2024-07-14 01:20:22.780727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.780754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.780952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.780979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.781172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.781197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.781364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.781390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.781563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.781589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.781791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.781820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.782023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.782051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.782283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.782309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.782508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.782537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.782756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.782784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 
00:34:33.741 [2024-07-14 01:20:22.782961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.782987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.783167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.783193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.783366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.783409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.783630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.783656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.783848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.783888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.784062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.784090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.741 qpair failed and we were unable to recover it. 00:34:33.741 [2024-07-14 01:20:22.784280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.741 [2024-07-14 01:20:22.784306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.784464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.784494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.784692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.784718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.784890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.784916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 
00:34:33.742 [2024-07-14 01:20:22.785117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.785164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.785370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.785399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.785595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.785621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.785812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.785841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.786095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.786123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.786317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.786343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.786544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.786573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.786804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.786830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.787017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.787058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.787244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.787272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 
00:34:33.742 [2024-07-14 01:20:22.787522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.787565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.787931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.787976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.788134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.788165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.788347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.788373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.788734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.788786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.788992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.789019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.789232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.789262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.789469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.789497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.789760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.789804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.790000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.790027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 
00:34:33.742 [2024-07-14 01:20:22.790232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.790276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.790482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.790525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.790708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.790735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.790921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.790949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.791203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.791248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.791479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.742 [2024-07-14 01:20:22.791522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.742 qpair failed and we were unable to recover it. 00:34:33.742 [2024-07-14 01:20:22.791720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.791747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.791940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.791983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.792199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.792242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.792445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.792489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 
00:34:33.743 [2024-07-14 01:20:22.792668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.792694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.792851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.792883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.793090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.793119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.793324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.793368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.793569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.793614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.793792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.793819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.794025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.794070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.794312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.794354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.794538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.794569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.794786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.794813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 
00:34:33.743 [2024-07-14 01:20:22.795008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.795051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.795229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.795272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.795510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.795540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.795741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.795769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.795965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.796013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.796215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.796258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.796474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.796518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.796670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.796707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.796849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.796880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.797116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.797158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 
00:34:33.743 [2024-07-14 01:20:22.797328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.797370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.797597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.797639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.797786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.797813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.798003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.798045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.798254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.798297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.798501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.798544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.798714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.798741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.798961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.743 [2024-07-14 01:20:22.799005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.743 qpair failed and we were unable to recover it. 00:34:33.743 [2024-07-14 01:20:22.799223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.799267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.799497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.799524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 
00:34:33.744 [2024-07-14 01:20:22.799710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.799737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.799931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.799974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.800175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.800201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.800374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.800419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.800574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.800602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.800783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.800810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.801046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.801090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.801285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.801327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.801569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.801599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.801760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.801787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 
00:34:33.744 [2024-07-14 01:20:22.802005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.802048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.802269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.802320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.802539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.802580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.802779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.802805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.802982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.803025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.803217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.803261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.803486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.803530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.803709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.803737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.803933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.803978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.804121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.804147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 
00:34:33.744 [2024-07-14 01:20:22.804382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.804426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.804603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.804631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.804842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.804873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.805061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.805110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.805331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.805378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.805572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.805615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.805792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.744 [2024-07-14 01:20:22.805819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.744 qpair failed and we were unable to recover it. 00:34:33.744 [2024-07-14 01:20:22.806037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.806082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.806310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.806355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.806560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.806603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 
00:34:33.745 [2024-07-14 01:20:22.806783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.806809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.807008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.807052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.807227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.807273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.807487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.807516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.807706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.807734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.807955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.807998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.808182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.808209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.808387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.808430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.808614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.808640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.808817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.808843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 
00:34:33.745 [2024-07-14 01:20:22.809052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.809094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.809333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.809376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.809544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.809587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.809757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.809783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.809954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.809997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.810190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.810217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.810469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.810511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.810661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.810687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.745 [2024-07-14 01:20:22.810860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.745 [2024-07-14 01:20:22.810892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.745 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.811062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.811105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 
00:34:33.746 [2024-07-14 01:20:22.811271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.811313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.811552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.811595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.811745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.811772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.811983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.812037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.812245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.812294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.812535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.812578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.812779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.812805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.812988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.813032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.813267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.813311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.813540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.813584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 
00:34:33.746 [2024-07-14 01:20:22.813728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.813756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.813954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.813998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.814240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.814283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.814517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.814546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.814762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.814792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.814984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.815027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.815268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.815312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.815547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.815576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.815795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.815822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.816051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.816094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 
00:34:33.746 [2024-07-14 01:20:22.816324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.816368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.816579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.816624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.816831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.816873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.817042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.746 [2024-07-14 01:20:22.817083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.746 qpair failed and we were unable to recover it. 00:34:33.746 [2024-07-14 01:20:22.817330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.817374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.817572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.817617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.817791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.817817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.818023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.818065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.818289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.818333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.818508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.818553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 
00:34:33.747 [2024-07-14 01:20:22.818730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.818757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.818961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.819005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.819210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.819254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.819454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.819483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.819679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.819707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.819890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.819916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.820109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.820153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.820385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.820429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.820668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.820709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.820886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.820913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 
00:34:33.747 [2024-07-14 01:20:22.821068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.821096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.821331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.821376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.821550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.821593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.821770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.821797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.821970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.821997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.822206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.822235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.822455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.822497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.822681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.822708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.822908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.822935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 00:34:33.747 [2024-07-14 01:20:22.823116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.823147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.747 qpair failed and we were unable to recover it. 
00:34:33.747 [2024-07-14 01:20:22.823397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.747 [2024-07-14 01:20:22.823441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.823645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.823688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.823860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.823892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.824072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.824098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.824288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.824335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.824531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.824573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.824776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.824802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.825013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.825040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.825273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.825317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.825550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.825593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 
00:34:33.748 [2024-07-14 01:20:22.825807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.825834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.826048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.826075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.826312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.826355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.826589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.826631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.826846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.826877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.827081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.827108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.827331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.827372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.827585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.827628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.827836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.827862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.828021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.828047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 
00:34:33.748 [2024-07-14 01:20:22.828259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.828285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.828517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.828560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.828831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.828886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.829091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.829117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.829352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.829395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.829629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.829673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.829848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.829885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.748 qpair failed and we were unable to recover it. 00:34:33.748 [2024-07-14 01:20:22.830062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.748 [2024-07-14 01:20:22.830088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.830290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.830334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.830536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.830579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 
00:34:33.749 [2024-07-14 01:20:22.830782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.830808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.831017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.831045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.831290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.831332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.831571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.831613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.831785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.831812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.832050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.832093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.832266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.832309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.832533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.832577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.832776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.832802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.832991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.833017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 
00:34:33.749 [2024-07-14 01:20:22.833218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.833262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.833464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.833508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.833712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.833739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.833932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.833959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.834172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.834218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.834455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.834499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.834670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.834700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.834912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.834942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.835188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.835232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.835425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.835469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 
00:34:33.749 [2024-07-14 01:20:22.835668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.835712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.835885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.835912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.836072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.836115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.749 [2024-07-14 01:20:22.836352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.749 [2024-07-14 01:20:22.836394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.749 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.836638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.836668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.836882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.836909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.837137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.837190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.837431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.837476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.837682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.837724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.837947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.837991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 
00:34:33.750 [2024-07-14 01:20:22.838229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.838273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.838503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.838547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.838714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.838740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.838930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.838974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.839209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.839253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.839485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.839529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.839736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.839763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.839960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.840005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.840180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.840207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.840385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.840428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 
00:34:33.750 [2024-07-14 01:20:22.840615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.840658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.840839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.840875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.841074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.841117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.841327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.841370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.841570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.841613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.841788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.841814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.842006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.842033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.842235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.842278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.842429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.842456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.842651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.842695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 
00:34:33.750 [2024-07-14 01:20:22.842894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.750 [2024-07-14 01:20:22.842922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.750 qpair failed and we were unable to recover it. 00:34:33.750 [2024-07-14 01:20:22.843135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.843179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.843387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.843430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.843606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.843633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.843814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.843845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.844085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.844128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.844313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.844357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.844552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.844581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.844782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.844808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.844989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.845017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 
00:34:33.751 [2024-07-14 01:20:22.845238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.845267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.845425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.845454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.845616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.845645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.845844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.845907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.846100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.846126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.846316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.846345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.846545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.846588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.846763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.846792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.847001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.847029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.847225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.847254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 
00:34:33.751 [2024-07-14 01:20:22.847476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.847505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.847721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.847751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.847970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.847997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.848175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.848201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.848434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.848463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.848687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.848716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.848907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.848934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.751 qpair failed and we were unable to recover it. 00:34:33.751 [2024-07-14 01:20:22.849081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.751 [2024-07-14 01:20:22.849107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.752 qpair failed and we were unable to recover it. 00:34:33.752 [2024-07-14 01:20:22.849330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.752 [2024-07-14 01:20:22.849359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.752 qpair failed and we were unable to recover it. 00:34:33.752 [2024-07-14 01:20:22.849747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.752 [2024-07-14 01:20:22.849805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.752 qpair failed and we were unable to recover it. 
00:34:33.752 [2024-07-14 01:20:22.850029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.850056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.850202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.850232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.850562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.850618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.850844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.850883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.851053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.851079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.851288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.851315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.851525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.851553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.851739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.851768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.851960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.851987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.852177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.852206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 
00:34:33.753 [2024-07-14 01:20:22.852411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.852440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.852662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.852688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.852884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.852913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.853111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.853137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.853334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.853360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.853595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.853623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.853853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.853890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.854084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.854109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.854328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.854356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.854556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.854584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 
00:34:33.753 [2024-07-14 01:20:22.854805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.854831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.855015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.855042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.855266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.855295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.855517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.753 [2024-07-14 01:20:22.855543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.753 qpair failed and we were unable to recover it. 00:34:33.753 [2024-07-14 01:20:22.855771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.855797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.855974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.856000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.856200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.856226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.856446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.856488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.856722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.856749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.856963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.856989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 
00:34:33.754 [2024-07-14 01:20:22.857170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.857198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.857399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.857426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.857640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.857666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.857847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.857888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.858042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.858069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.858279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.858305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.858508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.858535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.858749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.858775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.858976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.859002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.859202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.859232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 
00:34:33.754 [2024-07-14 01:20:22.859417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.859445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.859666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.859692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.859899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.859928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.860099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.860127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.860324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.860350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.860505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.860531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.860736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.860762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.860969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.860996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.861218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.861246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.754 [2024-07-14 01:20:22.861443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.861472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 
00:34:33.754 [2024-07-14 01:20:22.861664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.754 [2024-07-14 01:20:22.861690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.754 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.861873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.861900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.862103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.862131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.862338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.862364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.862585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.862611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.862768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.862794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.862978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.863004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.863226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.863254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.863485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.863511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.863704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.863731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 
00:34:33.755 [2024-07-14 01:20:22.863967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.863996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.864167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.864196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.864417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.864443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.864647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.864673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.864874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.864904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.865109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.865135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.865286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.865312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.865483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.865508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.865686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.865712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.865885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.865916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 
00:34:33.755 [2024-07-14 01:20:22.866071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.866097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.866249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.866275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.866444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.866470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.866671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.866697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.866911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.866937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.867141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.867169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.867357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.867385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.867582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.867607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.867810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.867838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.868030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.868059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 
00:34:33.755 [2024-07-14 01:20:22.868280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.868306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.868476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.868505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.868725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.868753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.868961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.868988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.869181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.869209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.869394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.869422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.755 [2024-07-14 01:20:22.869622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.755 [2024-07-14 01:20:22.869647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.755 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.869808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.869836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.870040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.870068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.870268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.870294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 
00:34:33.756 [2024-07-14 01:20:22.870467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.870495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.870708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.870737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.870943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.870970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.871228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.871274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.871503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.871534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.871713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.871739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.871949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.871986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.872187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.872215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.872389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.872415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.872766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.872829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 
00:34:33.756 [2024-07-14 01:20:22.873034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.873062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.873214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.873241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.873385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.873411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.873592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.873618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.873823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.873854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.874059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.874085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.874264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.874291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.874441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.874467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.874696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.874725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.874922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.874951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 
00:34:33.756 [2024-07-14 01:20:22.875155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.875181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.875375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.875404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.875598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.875627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.875846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.875887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.876068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.876094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.876273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.876300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.876502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.876529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.876724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.876753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.876971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.877001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.877205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.877231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 
00:34:33.756 [2024-07-14 01:20:22.877555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.877609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.877805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.877834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.878030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.878057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.878273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.878318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.878521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.878551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.878778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.878804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.756 qpair failed and we were unable to recover it. 00:34:33.756 [2024-07-14 01:20:22.879019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.756 [2024-07-14 01:20:22.879047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.879243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.879271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.879467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.879493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.879804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.879876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 
00:34:33.757 [2024-07-14 01:20:22.880074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.880100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.880313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.880339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.880686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.880737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.880975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.881002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.881149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.881175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.881513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.881571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.881764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.881793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.882009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.882035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.882228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.882257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.882472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.882498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 
00:34:33.757 [2024-07-14 01:20:22.882643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.882669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.882862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.882897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.883086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.883112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.883289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.883315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.883507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.883536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.883750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.883776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.883951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.883977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.884161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.884190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.884356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.884384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.884575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.884601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 
00:34:33.757 [2024-07-14 01:20:22.884770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.884799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.884977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.885003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.885181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.885207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.885412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.885438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.885586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.885612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.885791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.885816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.886030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.886057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.886260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.886288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.886513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.886539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.886741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.886769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 
00:34:33.757 [2024-07-14 01:20:22.886953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.886981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.887153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.887179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.887397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.887426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.887653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.887679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.887862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.887896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.888097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.888123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.888301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.888330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.757 qpair failed and we were unable to recover it. 00:34:33.757 [2024-07-14 01:20:22.888530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.757 [2024-07-14 01:20:22.888556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.888786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.888814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.889035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.889061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 
00:34:33.758 [2024-07-14 01:20:22.889262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.889288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.889559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.889608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.889825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.889871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.890062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.890088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.890312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.890341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.890535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.890564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.890761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.890787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.890942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.890972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.891128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.891153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.891308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.891336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 
00:34:33.758 [2024-07-14 01:20:22.891510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.891537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.891737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.891765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.891987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.892014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.892239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.892268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.892488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.892517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.892712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.892738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.892928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.892958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.893125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.893154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.893348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.893373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.893597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.893626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 
00:34:33.758 [2024-07-14 01:20:22.893822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.893851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.894052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.894078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.894281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.894309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.894506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.894534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.894764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.894790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.895019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.895047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.895248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.895274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.895425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.895450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.895630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.895655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.895806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.895834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 
00:34:33.758 [2024-07-14 01:20:22.896007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.896034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.758 qpair failed and we were unable to recover it. 00:34:33.758 [2024-07-14 01:20:22.896223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.758 [2024-07-14 01:20:22.896251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 00:34:33.759 [2024-07-14 01:20:22.896448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.759 [2024-07-14 01:20:22.896476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 00:34:33.759 [2024-07-14 01:20:22.896640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.759 [2024-07-14 01:20:22.896666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 00:34:33.759 [2024-07-14 01:20:22.896857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.759 [2024-07-14 01:20:22.896897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 00:34:33.759 [2024-07-14 01:20:22.897124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.759 [2024-07-14 01:20:22.897149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 00:34:33.759 [2024-07-14 01:20:22.897320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.759 [2024-07-14 01:20:22.897345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 00:34:33.759 [2024-07-14 01:20:22.897516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.759 [2024-07-14 01:20:22.897544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 00:34:33.759 [2024-07-14 01:20:22.897737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.759 [2024-07-14 01:20:22.897766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 00:34:33.759 [2024-07-14 01:20:22.897956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.759 [2024-07-14 01:20:22.897982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.759 qpair failed and we were unable to recover it. 
00:34:33.763 [2024-07-14 01:20:22.938031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.763 [2024-07-14 01:20:22.938071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.763 qpair failed and we were unable to recover it. 00:34:33.763 [2024-07-14 01:20:22.938282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.763 [2024-07-14 01:20:22.938310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.763 qpair failed and we were unable to recover it. 00:34:33.763 [2024-07-14 01:20:22.938485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.763 [2024-07-14 01:20:22.938511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.763 qpair failed and we were unable to recover it. 00:34:33.763 [2024-07-14 01:20:22.938730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.763 [2024-07-14 01:20:22.938756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.763 qpair failed and we were unable to recover it. 00:34:33.763 [2024-07-14 01:20:22.938952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.763 [2024-07-14 01:20:22.938984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.763 qpair failed and we were unable to recover it. 00:34:33.763 [2024-07-14 01:20:22.939169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.939195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.939384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.939410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.939611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.939637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.939843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.939884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.940069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.940095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 
00:34:33.764 [2024-07-14 01:20:22.940273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.940300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.940512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.940538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.940740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.940766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.940948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.940975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.941179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.941218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.941399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.941425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.941572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.941598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.941824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.941854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.942089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.942124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.942313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.942340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 
00:34:33.764 [2024-07-14 01:20:22.942523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.942550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.942750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.942776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.942997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.943023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.943171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.943197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.943373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.943399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.943597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.943626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.943812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.943838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.944020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.944047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.944220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.944246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.944399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.944425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 
00:34:33.764 [2024-07-14 01:20:22.944723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.944781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.945034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.945060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.945215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.945241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.945425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.945451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.945623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.945649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.945822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.945848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.946031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.946057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.946258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.946285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.946425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.946462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.946640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.946667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 
00:34:33.764 [2024-07-14 01:20:22.946811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.946838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.947051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.947077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.947253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.947279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.947486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.947512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.947664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.947695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.764 [2024-07-14 01:20:22.947877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.764 [2024-07-14 01:20:22.947904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.764 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.948112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.948138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.948349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.948376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.948544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.948588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.948774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.948803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 
00:34:33.765 [2024-07-14 01:20:22.948997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.949024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.949229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.949273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.949481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.949510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.949706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.949732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.949965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.949992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.950181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.950207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.950354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.950380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.950553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.950579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.950768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.950795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.950989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.951015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 
00:34:33.765 [2024-07-14 01:20:22.951187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.951213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.951362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.951390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.951576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.951602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.951839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.951874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.952073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.952101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.952259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.952286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.952487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.952513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.952695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.952721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.952943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.952969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.953170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.953196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 
00:34:33.765 [2024-07-14 01:20:22.953370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.953408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.953606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.953633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.953810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.953836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.954066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.954093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.954283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.954309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.954502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.954581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.954784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.954813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.955033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.955059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.955287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.955316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.955481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.955510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 
00:34:33.765 [2024-07-14 01:20:22.955694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.955730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.955880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.955907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.956110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.956137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.956288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.956325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.956530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.956560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.956753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.956795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.765 qpair failed and we were unable to recover it. 00:34:33.765 [2024-07-14 01:20:22.957009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.765 [2024-07-14 01:20:22.957036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.957210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.957242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.957447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.957476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.957670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.957697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 
00:34:33.766 [2024-07-14 01:20:22.957903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.957944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.958145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.958184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.958383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.958410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.958635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.958664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.958836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.958871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.959067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.959094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.959272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.959298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.959484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.959510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.959687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.959727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.959944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.959974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 
00:34:33.766 [2024-07-14 01:20:22.960135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.960163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.960380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.960426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.960626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.960672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.960852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.960885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.961075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.961101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.961340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.961383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.961623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.961666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.961848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.961880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.962045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.962071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.962280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.962310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 
00:34:33.766 [2024-07-14 01:20:22.962645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.962702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.962898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.962928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.963130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.963177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.963384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.963428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.963664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.963708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.963889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.963917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.964088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.964134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.964348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.964391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.964604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.964649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.964828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.964871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 
00:34:33.766 [2024-07-14 01:20:22.965021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.965051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.965258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.965301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.965570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.965621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.965803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.965830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.966038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.966085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.966259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.966289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.966443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.766 [2024-07-14 01:20:22.966470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.766 qpair failed and we were unable to recover it. 00:34:33.766 [2024-07-14 01:20:22.966785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.966846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.967090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.967117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.967302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.967328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f18000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 
00:34:33.767 [2024-07-14 01:20:22.967568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.967614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.967797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.967825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.968011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.968039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.968249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.968279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.968510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.968557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.968780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.968808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.968972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.969000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.969199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.969243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.969450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.969494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.969769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.969814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 
00:34:33.767 [2024-07-14 01:20:22.969993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.970021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.970199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.970255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.970447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.970493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.970703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.970748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.970911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.970939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.971150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.971194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.971368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.971422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.971630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.971673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.971858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.971891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.972068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.972113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 
00:34:33.767 [2024-07-14 01:20:22.972344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.972389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.972611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.972665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.972854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.972904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.973131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.973177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.973335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.973363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.973536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.973579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.973807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.973834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.974029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.974079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.974349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.974393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.974629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.974673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 
00:34:33.767 [2024-07-14 01:20:22.974964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.767 [2024-07-14 01:20:22.974992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.767 qpair failed and we were unable to recover it. 00:34:33.767 [2024-07-14 01:20:22.975168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.975211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.975418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.975463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.975788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.975847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.976039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.976081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.976254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.976305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.976531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.976575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.976793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.976821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.977003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.977048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.977344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.977388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 
00:34:33.768 [2024-07-14 01:20:22.977633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.977677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.977937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.977981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.978157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.978184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.978384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.978430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.978633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.978659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.978836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.978878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.979061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.979106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.979315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.979342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.979592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.979638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.979809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.979836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 
00:34:33.768 [2024-07-14 01:20:22.980029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.980073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.980391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.980436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.980647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.980701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.980891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.980937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.981116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.981159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.981351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.981396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.981635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.981680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.981844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.981876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.982084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.982130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.982430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.982474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 
00:34:33.768 [2024-07-14 01:20:22.982643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.982669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.982834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.982881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.983084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.983128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.983350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.983393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.983666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.983716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.983936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.983981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.984185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.984237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.984450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.984494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.984769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.984813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.985037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.985082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 
00:34:33.768 [2024-07-14 01:20:22.985310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.985363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.985597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.768 [2024-07-14 01:20:22.985640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.768 qpair failed and we were unable to recover it. 00:34:33.768 [2024-07-14 01:20:22.985838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.985870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.986067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.986111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.986335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.986382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.986572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.986602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.986833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.986876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.987147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.987191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.987420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.987470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.987672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.987727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 
00:34:33.769 [2024-07-14 01:20:22.987883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.987909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.988149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.988193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.988418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.988464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.988677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.988719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.988927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.988973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.989190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.989234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.989400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.989456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.989676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.989704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.989922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.989950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.990153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.990198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 
00:34:33.769 [2024-07-14 01:20:22.990428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.990472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.990713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.990739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.990965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.991009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.991251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.991295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.991533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.991587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.991808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.991835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.992020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.992065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.992280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.992330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.992528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.992572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.992791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.992817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 
00:34:33.769 [2024-07-14 01:20:22.993057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.993103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.993374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.993419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.993628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.993658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.993858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.993894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.994082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.994108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.994344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.994371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.994561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.994592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.994894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.994956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.995160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.995212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.995411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.995440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 
00:34:33.769 [2024-07-14 01:20:22.995641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.995670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.995879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.995924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.769 [2024-07-14 01:20:22.996126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.769 [2024-07-14 01:20:22.996161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.769 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.996464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.996523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.996752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.996782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.996965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.996992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.997170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.997211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.997418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.997447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.997672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.997701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.997878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.997905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 
00:34:33.770 [2024-07-14 01:20:22.998084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.998110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.998278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.998304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.998629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.998692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.998920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.998946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.999219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.999248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.999530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.999584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:22.999775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:22.999803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.000026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.000053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.000286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.000326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.000538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.000582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 
00:34:33.770 [2024-07-14 01:20:23.000792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.000844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.001035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.001064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.001256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.001283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.001462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.001506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.001718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.001761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.001949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.001977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.002178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.002222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.002584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.002635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.002837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.002870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.003078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.003105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 
00:34:33.770 [2024-07-14 01:20:23.003337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.003381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.003616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.003660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.003819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.003846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.004025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.004071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.004253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.004298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.004530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.004572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.004787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.004816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.005009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.005037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.005215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.005266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.005461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.005506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 
00:34:33.770 [2024-07-14 01:20:23.005685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.005712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.005883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.005910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.770 [2024-07-14 01:20:23.006123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.770 [2024-07-14 01:20:23.006176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.770 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.006399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.006443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.006676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.006723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.006964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.007009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.007191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.007235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.007413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.007457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.007648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.007675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.007826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.007869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 
00:34:33.771 [2024-07-14 01:20:23.008070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.008120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.008353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.008398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.008638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.008693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.008852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.008887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.009123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.009152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.009393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.009440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.009674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.009717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.009881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.009909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.010113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.010172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.010365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.010422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 
00:34:33.771 [2024-07-14 01:20:23.010791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.010841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.011053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.011098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.011314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.011368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.011753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.011809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.012013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.012039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.012276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.012320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.012547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.012591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.012749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.012779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.012961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.012988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.013195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.013239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 
00:34:33.771 [2024-07-14 01:20:23.013471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.013516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.013720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.013747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.013952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.013997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.014227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.014272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.014474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.014519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.014729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.014770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.014990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.771 [2024-07-14 01:20:23.015033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.771 qpair failed and we were unable to recover it. 00:34:33.771 [2024-07-14 01:20:23.015242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.015289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.015532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.015574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.015779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.015806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 
00:34:33.772 [2024-07-14 01:20:23.016010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.016054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.016231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.016276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.016479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.016523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.016717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.016743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.016952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.016998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.017203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.017247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.017457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.017498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.017703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.017729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.017958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.018003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.018206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.018250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 
00:34:33.772 [2024-07-14 01:20:23.018497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.018540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.018742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.018776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.018983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.019028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.019205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.019253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.019473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.019517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.019704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.019731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.019938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.019990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.020184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.020228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.020434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.020483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.020715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.020741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 
00:34:33.772 [2024-07-14 01:20:23.020980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.021024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.021221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.021251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.021485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.021528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.021756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.021782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.021979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.022024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.022226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.022269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.022492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.022536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.022799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.022825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.023076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.023121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.023330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.023360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 
00:34:33.772 [2024-07-14 01:20:23.023582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.023611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.023955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.023982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.024218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.024247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.024444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.024472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.024693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.024722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.024960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.024987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.025171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.772 [2024-07-14 01:20:23.025197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.772 qpair failed and we were unable to recover it. 00:34:33.772 [2024-07-14 01:20:23.025398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.025425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.025598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.025627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.025814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.025843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 
00:34:33.773 [2024-07-14 01:20:23.026028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.026056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.026306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.026336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.026558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.026587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.026806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.026835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.027075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.027101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.027316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.027350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.027737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.027788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.028017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.028043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.028279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.028308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.028693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.028740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 
00:34:33.773 [2024-07-14 01:20:23.028942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.028969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.029122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.029148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.029338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.029363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.029554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.029580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.029789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.029816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.030029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.030056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.030257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.030286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.030449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.030480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.030750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.030802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.031039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.031066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 
00:34:33.773 [2024-07-14 01:20:23.031274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.031300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.031620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.031674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.031893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.031919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.032091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.032117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.032376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.032403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.032636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.032665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.032893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.032934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.033147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.033177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.033415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.033444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.033640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.033682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 
00:34:33.773 [2024-07-14 01:20:23.033896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.033922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.034065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.034091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.034318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.034353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.034590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.034619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.034820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.034846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.035063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.035089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.035261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.035302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.035520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.035548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.773 qpair failed and we were unable to recover it. 00:34:33.773 [2024-07-14 01:20:23.035740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.773 [2024-07-14 01:20:23.035769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.035994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.036021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 
00:34:33.774 [2024-07-14 01:20:23.036160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.036202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.036429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.036458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.036780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.036834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.037082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.037109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.037346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.037373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.037546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.037573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.037736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.037762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.037920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.037947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.038098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.038124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.038295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.038324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 
00:34:33.774 [2024-07-14 01:20:23.038525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.038566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.038792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.038820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.039003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.039030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.039204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.039230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.039430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.039459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.039680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.039705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.039898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.039927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.040124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.040153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.040345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.040371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.040550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.040580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 
00:34:33.774 [2024-07-14 01:20:23.040781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.040810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.040991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.041017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.041218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.041247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.041433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.041461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.041664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.041689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.041861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.041894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.042087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.042112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.042328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.042353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.042529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.042555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.042772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.042797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 
00:34:33.774 [2024-07-14 01:20:23.042965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.042991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.043200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.043226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.043400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.043426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.043579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.043605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.043804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.043830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.044012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.044045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.044251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.044276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.044461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.044487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.044652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.044677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.774 qpair failed and we were unable to recover it. 00:34:33.774 [2024-07-14 01:20:23.044824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.774 [2024-07-14 01:20:23.044851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 
00:34:33.775 [2024-07-14 01:20:23.045028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.045053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.045229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.045254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.045485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.045514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.045734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.045772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.046019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.046046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.046249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.046274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.046450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.046478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.046670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.046700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.046885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.046911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.047086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.047112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 
00:34:33.775 [2024-07-14 01:20:23.047319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.047347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.047564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.047591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.047775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.047803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.048032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.048058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.048264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.048289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.048465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.048493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.048656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.048685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.048878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.048904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.049054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.049079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.049284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.049312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 
00:34:33.775 [2024-07-14 01:20:23.049565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.049593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.049774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.049815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.050030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.050056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.050268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.050294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.050530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.050558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.050790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.050815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.051034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.051060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.051231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.051259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.051521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.051572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.051762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.051790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 
00:34:33.775 [2024-07-14 01:20:23.052015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.052040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.052203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.052229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.052428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.052456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.052644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.052672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.052890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.052933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.053080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.053105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.053309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.053337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.775 [2024-07-14 01:20:23.053494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.775 [2024-07-14 01:20:23.053522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.775 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.053773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.053801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.054022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.054048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 
00:34:33.776 [2024-07-14 01:20:23.054221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.054249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.054433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.054461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.054650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.054678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.054836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.054870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.055061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.055087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.055302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.055330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.055544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.055572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.055861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.055915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.056116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.056157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 00:34:33.776 [2024-07-14 01:20:23.056351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.776 [2024-07-14 01:20:23.056379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.776 qpair failed and we were unable to recover it. 
00:34:33.776 [2024-07-14 01:20:23.056596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:33.776 [2024-07-14 01:20:23.056625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:33.776 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and "qpair failed and we were unable to recover it." messages for tqpair=0x11c3600 (addr=10.0.0.2, port=4420) repeat continuously from 01:20:23.056 through 01:20:23.100 ...]
00:34:33.781 [2024-07-14 01:20:23.100579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:33.781 [2024-07-14 01:20:23.100604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:33.781 qpair failed and we were unable to recover it.
00:34:33.781 [2024-07-14 01:20:23.100777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.781 [2024-07-14 01:20:23.100802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.781 qpair failed and we were unable to recover it. 00:34:33.781 [2024-07-14 01:20:23.101015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.781 [2024-07-14 01:20:23.101041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.781 qpair failed and we were unable to recover it. 00:34:33.781 [2024-07-14 01:20:23.101244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.781 [2024-07-14 01:20:23.101269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.781 qpair failed and we were unable to recover it. 00:34:33.781 [2024-07-14 01:20:23.101426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.781 [2024-07-14 01:20:23.101451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.781 qpair failed and we were unable to recover it. 00:34:33.781 [2024-07-14 01:20:23.101606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.781 [2024-07-14 01:20:23.101635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.781 qpair failed and we were unable to recover it. 00:34:33.781 [2024-07-14 01:20:23.101782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.781 [2024-07-14 01:20:23.101807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.781 qpair failed and we were unable to recover it. 00:34:33.781 [2024-07-14 01:20:23.101984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.781 [2024-07-14 01:20:23.102010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.781 qpair failed and we were unable to recover it. 00:34:33.781 [2024-07-14 01:20:23.102182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.781 [2024-07-14 01:20:23.102208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.781 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.102363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.102388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.102534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.102559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 
00:34:33.782 [2024-07-14 01:20:23.102704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.102731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.102907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.102933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.103113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.103138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.103291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.103317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.103468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.103492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.103664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.103689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.103943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.103969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.104109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.104134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.104315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.104340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.104511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.104536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 
00:34:33.782 [2024-07-14 01:20:23.104787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.104812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.105010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.105035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.105184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.105210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.105411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.105436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.105607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.105632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.105811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.105836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.106016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.106042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.106184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.106209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.106382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.106407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.106574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.106599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 
00:34:33.782 [2024-07-14 01:20:23.106753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.106778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.106931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.106960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.107133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.107158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.107334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.107359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.107503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.107528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.107729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.107755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.107963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.107989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.108192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.108218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.108397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.108422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.108595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.108620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 
00:34:33.782 [2024-07-14 01:20:23.108823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.108848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.109006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.109033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.109186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.109212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.109359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.782 [2024-07-14 01:20:23.109384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.782 qpair failed and we were unable to recover it. 00:34:33.782 [2024-07-14 01:20:23.109536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.109561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.109739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.109764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.109932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.109958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.110107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.110132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.110283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.110308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.110481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.110506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 
00:34:33.783 [2024-07-14 01:20:23.110704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.110730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.110877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.110903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.111101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.111127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.111272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.111297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.111447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.111473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.111648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.111673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.111843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.111877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.112064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.112089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.112297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.112323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.112527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.112552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 
00:34:33.783 [2024-07-14 01:20:23.112720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.112745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.112916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.112942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.113118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.113143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.113316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.113341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.113487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.113512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.113704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.113732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.113926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.113952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.114132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.114157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.114332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.114357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.114526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.114551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 
00:34:33.783 [2024-07-14 01:20:23.114766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.114791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.114968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.114994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.115168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.115194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.115371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.115396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.115565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.115590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.115766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.115791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.115953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.115979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.116153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.116178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.116345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.116370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.116560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.116585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 
00:34:33.783 [2024-07-14 01:20:23.116778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.116806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.116975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.117001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.117191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.117216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.117390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.117415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.117562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.783 [2024-07-14 01:20:23.117587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.783 qpair failed and we were unable to recover it. 00:34:33.783 [2024-07-14 01:20:23.117789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.117814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.118046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.118073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.118214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.118239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.118469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.118497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.118739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.118766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 
00:34:33.784 [2024-07-14 01:20:23.118942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.118968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.119148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.119173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.119349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.119373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.119554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.119579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.119756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.119782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.119951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.119977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.120156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.120181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.120327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.120353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.120536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.120561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.120743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.120772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 
00:34:33.784 [2024-07-14 01:20:23.120950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.120975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.121174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.121199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.121399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.121424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.121602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.121627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.121797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.121824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.122022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.122047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.122221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.122245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.122383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.122408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.122606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.122631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.122775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.122800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 
00:34:33.784 [2024-07-14 01:20:23.123002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.123028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.123226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.123251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.123571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.123637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.123927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.123954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.124096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.124122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.124305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.124331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.124505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.124530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.124679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.124704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.124889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.124926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 00:34:33.784 [2024-07-14 01:20:23.125193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.784 [2024-07-14 01:20:23.125219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.784 qpair failed and we were unable to recover it. 
00:34:33.785 [2024-07-14 01:20:23.125393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.125418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.125623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.125648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.125852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.125884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.126073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.126100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.126250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.126275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.126459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.126484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.126734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.126773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.126969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.126995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.127170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.127195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.127364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.127389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 
00:34:33.785 [2024-07-14 01:20:23.127563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.127597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.127753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.127785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:33.785 [2024-07-14 01:20:23.128031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:33.785 [2024-07-14 01:20:23.128058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:33.785 qpair failed and we were unable to recover it. 00:34:34.058 [2024-07-14 01:20:23.128214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.058 [2024-07-14 01:20:23.128240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.058 qpair failed and we were unable to recover it. 00:34:34.058 [2024-07-14 01:20:23.128493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.058 [2024-07-14 01:20:23.128520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.058 qpair failed and we were unable to recover it. 00:34:34.058 [2024-07-14 01:20:23.128770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.058 [2024-07-14 01:20:23.128795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.058 qpair failed and we were unable to recover it. 00:34:34.058 [2024-07-14 01:20:23.128979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.058 [2024-07-14 01:20:23.129011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.058 qpair failed and we were unable to recover it. 00:34:34.058 [2024-07-14 01:20:23.129178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.058 [2024-07-14 01:20:23.129203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.058 qpair failed and we were unable to recover it. 00:34:34.058 [2024-07-14 01:20:23.129404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.058 [2024-07-14 01:20:23.129429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.058 qpair failed and we were unable to recover it. 00:34:34.058 [2024-07-14 01:20:23.129612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.058 [2024-07-14 01:20:23.129637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.058 qpair failed and we were unable to recover it. 
00:34:34.062 [2024-07-14 01:20:23.172853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.172883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.173084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.173110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.173265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.173290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.173461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.173490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.173658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.173684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.173862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.173893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.174063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.174088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.174261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.174286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.174493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.174518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.174694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.174720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 
00:34:34.062 [2024-07-14 01:20:23.174883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.174909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.175054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.175079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.175333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.175358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.175531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.175555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.175732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.175757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.175927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.175953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.176125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.176150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.176303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.176328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.176498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.176523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.176695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.176720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 
00:34:34.062 [2024-07-14 01:20:23.176915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.176941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.177140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.177165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.177320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.177345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.177513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.177538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.177742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.177767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.177943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.177968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.178172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.178197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.178372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.178397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.178598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.178623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.178826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.178851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 
00:34:34.062 [2024-07-14 01:20:23.179035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.179065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.179238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.179263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.179461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.179485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.179692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.179718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.179895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.179921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.180090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.180115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.180288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.180313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.180522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.180547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.180797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.180822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.181037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.181063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 
00:34:34.062 [2024-07-14 01:20:23.181241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.181266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.181438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.181463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.181638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.181663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.062 qpair failed and we were unable to recover it. 00:34:34.062 [2024-07-14 01:20:23.181888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.062 [2024-07-14 01:20:23.181930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.182107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.182132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.182317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.182342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.182518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.182543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.182716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.182741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.182919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.182945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.183094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.183121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 
00:34:34.063 [2024-07-14 01:20:23.183326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.183351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.183555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.183580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.183725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.183750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.183926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.183952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.184162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.184187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.184387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.184412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.184558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.184583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.184851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.184885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.185051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.185076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.185279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.185304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 
00:34:34.063 [2024-07-14 01:20:23.185475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.185500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.185751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.185776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.185956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.185981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.186159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.186184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.186387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.186412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.186553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.186578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.186754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.186779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.186923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.186948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.187121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.187145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.187317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.187342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 
00:34:34.063 [2024-07-14 01:20:23.187631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.187689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.187926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.187952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.188103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.188127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.188297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.188325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.188546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.188574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.188793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.188818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.188994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.189020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.189215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.189243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.189441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.189469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.189703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.189728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 
00:34:34.063 [2024-07-14 01:20:23.189923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.189952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.190146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.190173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.190372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.190413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.190637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.190662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.190825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.190850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.191053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.191081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.191276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.191304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.191541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.191569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.191740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.191766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.191978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.192008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 
00:34:34.063 [2024-07-14 01:20:23.192224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.192252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.192574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.192628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.192823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.192848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.193032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.193058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.193260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.193289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.193505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.193533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.063 [2024-07-14 01:20:23.193735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.063 [2024-07-14 01:20:23.193759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.063 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.193943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.193968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.194155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.194189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.194384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.194413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 
00:34:34.064 [2024-07-14 01:20:23.194726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.194785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.195002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.195028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.195228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.195255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.195471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.195499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.195669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.195694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.195843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.195873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.196104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.196132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.196371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.196399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.196584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.196612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.196806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.196833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 
00:34:34.064 [2024-07-14 01:20:23.197033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.197067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.197259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.197299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.197597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.197625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.197854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.197885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.198089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.198119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.198320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.198348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.198590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.198617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.198813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.198839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.199035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.199061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.199240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.199267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 
00:34:34.064 [2024-07-14 01:20:23.199506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.199534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.199752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.199778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.199953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.199978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.200176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.200217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.200585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.200636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.200832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.200861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.201095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.201132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.201532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.201563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.201738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.201764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 00:34:34.064 [2024-07-14 01:20:23.201970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.064 [2024-07-14 01:20:23.201996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.064 qpair failed and we were unable to recover it. 
00:34:34.064 [2024-07-14 01:20:23.202193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.064 [2024-07-14 01:20:23.202221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:34.064 qpair failed and we were unable to recover it.
00:34:34.068 [... the same three-message sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats for every reconnect attempt from 01:20:23.202 through 01:20:23.247 ...]
00:34:34.068 [2024-07-14 01:20:23.247836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.247873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.248069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.248094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.248258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.248291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.248503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.248528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.248696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.248721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.248903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.248929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.249099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.249124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.249343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.249371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.249564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.249592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.249781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.249807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 
00:34:34.068 [2024-07-14 01:20:23.249942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.249968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.250160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.250188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.250364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.250389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.250563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.250588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.250729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.250755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.250928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.250953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.251134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.251159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.251327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.251355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.251521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.251546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.251720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.251745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 
00:34:34.068 [2024-07-14 01:20:23.251924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.251950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.252122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.252147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.252316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.068 [2024-07-14 01:20:23.252344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.068 qpair failed and we were unable to recover it. 00:34:34.068 [2024-07-14 01:20:23.252527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.252553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.252722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.252747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.252908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.252934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.253077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.253102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.253316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.253344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.253517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.253542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.253693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.253718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 
00:34:34.069 [2024-07-14 01:20:23.253933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.253959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.254133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.254158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.254337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.254379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.254543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.254571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.254759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.254784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.254940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.254966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.255170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.255195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.255399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.255424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.255593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.255619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.255774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.255800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 
00:34:34.069 [2024-07-14 01:20:23.255973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.255998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.256171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.256197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.256376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.256403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.256621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.256662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.256856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.256894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.257106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.257134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.257346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.257390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.257592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.257636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.257840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.257874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.258057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.258094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 
00:34:34.069 [2024-07-14 01:20:23.258297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.258341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.258553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.258598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.258785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.258812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.259017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.259045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.259322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.259368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.259607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.259660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.259877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.259920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.260105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.260133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.260360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.260404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.260619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.260667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 
00:34:34.069 [2024-07-14 01:20:23.260874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.260901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.261082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.261108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.261319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.261364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.261540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.261567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.261713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.261739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.261936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.261980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.262183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.262225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.262455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.262498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.262651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.262678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.262861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.262893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 
00:34:34.069 [2024-07-14 01:20:23.263100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.263129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.263362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.263404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.263664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.263711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.263871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.263898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.264099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.264142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.264319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.264363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.264651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.264710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.264887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.264914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.265090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.265133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.265297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.265340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 
00:34:34.069 [2024-07-14 01:20:23.265488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.069 [2024-07-14 01:20:23.265515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.069 qpair failed and we were unable to recover it. 00:34:34.069 [2024-07-14 01:20:23.265688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.265714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.265931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.265957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.266186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.266229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.266445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.266488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.266666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.266693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.266897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.266926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.267123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.267167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.267399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.267441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.267640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.267669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 
00:34:34.070 [2024-07-14 01:20:23.267886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.267912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.268113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.268142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.268388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.268430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.268666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.268710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.268895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.268924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.269131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.269175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.269413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.269460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.269665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.269708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.269888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.269926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.270125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.270170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 
00:34:34.070 [2024-07-14 01:20:23.270376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.270419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.270630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.270673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.270857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.270889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.271034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.271060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.271289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.271331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.271503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.271547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.271745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.271789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.271999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.272025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.272207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.272250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.272484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.272527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 
00:34:34.070 [2024-07-14 01:20:23.272714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.272739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.272943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.272986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.273215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.273258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.273489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.273531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.273719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.273746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.273948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.273993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.274178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.274221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.274420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.274463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.274635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.274661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.274837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.274863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 
00:34:34.070 [2024-07-14 01:20:23.275110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.275139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.275356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.275400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.275630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.275673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.275858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.275889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.276101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.276127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.276302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.276331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.276547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.276590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.276764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.276789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.276973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.277000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.277205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.277248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 
00:34:34.070 [2024-07-14 01:20:23.277445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.277475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.277672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.277697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.277874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.277901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.278044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.278070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.278271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.278314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.070 qpair failed and we were unable to recover it. 00:34:34.070 [2024-07-14 01:20:23.278548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.070 [2024-07-14 01:20:23.278592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.278793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.278822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.279031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.279057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.279262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.279305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.279508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.279551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 
00:34:34.071 [2024-07-14 01:20:23.279732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.279757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.279939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.279966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.280190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.280232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.280465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.280509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.280665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.280692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.280873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.280899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.281098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.281123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.281317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.281346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.281557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.281600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.281770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.281796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 
00:34:34.071 [2024-07-14 01:20:23.281973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.281999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.282168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.282212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.282441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.282485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.282664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.282690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.282864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.282896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.283098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.283127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.283373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.283417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.283618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.283662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.283872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.283899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.284063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.284089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 
00:34:34.071 [2024-07-14 01:20:23.284279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.284328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.284555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.284599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.284776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.284802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.284994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.285020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.285214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.285256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.285461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.285503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.285669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.285711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.285890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.285919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.286110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.286140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.286395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.286439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 
00:34:34.071 [2024-07-14 01:20:23.286623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.286666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.286874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.286900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.287078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.287104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.287307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.287351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.287595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.287638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.287840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.287871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.288079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.288108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.288358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.288388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.288641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.288683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.288863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.288894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 
00:34:34.071 [2024-07-14 01:20:23.289115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.289144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.289353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.289382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.289591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.289619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.071 [2024-07-14 01:20:23.289821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.071 [2024-07-14 01:20:23.289847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.071 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.290063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.290089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.290289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.290332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.290534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.290579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.290748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.290774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.290951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.290977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.291183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.291226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 
00:34:34.072 [2024-07-14 01:20:23.291435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.291479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.291710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.291753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.291953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.291997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.292190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.292219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.292431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.292474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.292709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.292753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.292918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.292948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.293135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.293179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.293410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.293454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.293630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.293656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 
00:34:34.072 [2024-07-14 01:20:23.293836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.293864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.294059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.294103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.294338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.294381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.294570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.294617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.294775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.294801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.295026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.295070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.295270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.295313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.295511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.295540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.295755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.295780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.295985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.296028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 
00:34:34.072 [2024-07-14 01:20:23.296235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.296278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.296485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.296527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.296703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.296729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.296923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.296952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.297173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.297216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.297425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.297470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.297661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.297687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.297889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.297915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.298121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.298147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.298324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.298367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 
00:34:34.072 [2024-07-14 01:20:23.298572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.298614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.298766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.298792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.299019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.299063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.299227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.299269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.299470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.299513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.299713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.299739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.299932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.299978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.300153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.300196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.300405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.300449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.300647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.300673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 
00:34:34.072 [2024-07-14 01:20:23.300854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.300886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.301091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.301134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.301333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.301377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.301552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.301601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.301779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.301804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.301976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.302002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.302202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.302245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.302477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.302520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.302694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.302721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.072 [2024-07-14 01:20:23.302897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.302941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 
00:34:34.072 [2024-07-14 01:20:23.303144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.072 [2024-07-14 01:20:23.303187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.072 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.303378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.303421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.303566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.303593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.303797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.303829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.304039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.304068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.304276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.304319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.304523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.304568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.304769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.304795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.304963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.305006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.305241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.305284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 
00:34:34.073 [2024-07-14 01:20:23.305487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.305530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.305675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.305701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.305881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.305924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.306127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.306170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.306373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.306415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.306600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.306627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.306780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.306806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.307009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.307054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.307253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.307296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.307529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.307573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 
00:34:34.073 [2024-07-14 01:20:23.307747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.307773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.307970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.308014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.308214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.308257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.308461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.308506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.308692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.308718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.308942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.308985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.309185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.309228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.309442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.309468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.309671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.309697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.309895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.309921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 
00:34:34.073 [2024-07-14 01:20:23.310127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.310171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.310375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.310417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.310625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.310652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.310837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.310864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.311046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.311088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.311316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.311359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.311561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.311606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.311782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.311808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.312033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.312077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.312314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.312357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 
00:34:34.073 [2024-07-14 01:20:23.312584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.312627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.312777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.312804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.313026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.313070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.313304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.313350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.313581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.313625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.313777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.313803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.314008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.314052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.314221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.314264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.314466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.314509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.314708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.314733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 
00:34:34.073 [2024-07-14 01:20:23.314880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.314906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.315128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.315155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.315359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.315402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.315607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.315651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.315861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.315897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.316071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.316114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.073 qpair failed and we were unable to recover it. 00:34:34.073 [2024-07-14 01:20:23.316310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.073 [2024-07-14 01:20:23.316352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.316597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.316641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.316839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.316870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.317074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.317118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 
00:34:34.074 [2024-07-14 01:20:23.317323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.317368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.317577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.317620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.317792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.317818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.318014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.318041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.318238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.318281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.318490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.318516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.318700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.318727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.318951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.318996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.319222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.319265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.319477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.319520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 
00:34:34.074 [2024-07-14 01:20:23.319728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.319754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.319931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.319958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.320187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.320230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.320470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.320514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.320657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.320683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.320839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.320871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.321056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.321083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.321310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.321354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.321556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.321586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.321806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.321831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 
00:34:34.074 [2024-07-14 01:20:23.322006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.322049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.322281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.322325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.322531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.322573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.322751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.322780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.323012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.323054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.323260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.323303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.323529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.323572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.323756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.323781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.323947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.324002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.324227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.324270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 
00:34:34.074 [2024-07-14 01:20:23.324475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.324519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.324731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.324756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.324985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.325029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.325238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.325281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.325506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.325549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.325767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.325792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.326015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.326059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.326296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.326339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.326547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.326590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.326746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.326773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 
00:34:34.074 [2024-07-14 01:20:23.326998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.327042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.327273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.327316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.327552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.327595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.327802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.327827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.328065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.328108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.074 qpair failed and we were unable to recover it. 00:34:34.074 [2024-07-14 01:20:23.328346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.074 [2024-07-14 01:20:23.328389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.328620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.328664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.328875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.328901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.329077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.329103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.329309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.329337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 
00:34:34.075 [2024-07-14 01:20:23.329560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.329604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.329810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.329836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.330021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.330047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.330257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.330300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.330509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.330551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.330750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.330776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.330956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.330983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.331216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.331258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.331455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.331485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.331675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.331701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 
00:34:34.075 [2024-07-14 01:20:23.331882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.331918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.332125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.332154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.332349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.332394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.332628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.332675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.332846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.332877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.333056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.333082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.333254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.333298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.333466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.333509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.333687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.333713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.333928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.333954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 
00:34:34.075 [2024-07-14 01:20:23.334154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.334197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.334393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.334437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.334642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.334668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.334875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.334902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.335128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.335170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.335385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.335427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.335621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.335664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.335824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.335850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.336083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.336127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.336355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.336398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 
00:34:34.075 [2024-07-14 01:20:23.336628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.336672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.336826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.336852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.337094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.337137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.337382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.337426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.337622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.337665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.337889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.337916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.338094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.338120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.338301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.338344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.338546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.338590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.338799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.338825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 
00:34:34.075 [2024-07-14 01:20:23.339015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.339043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.339208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.339251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.339481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.339524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.339705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.339730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.339956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.339999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.340244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.340286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.340517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.340560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.340731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.340756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.340984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.341028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.341204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.341252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 
00:34:34.075 [2024-07-14 01:20:23.341444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.341488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.341689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.341715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.075 [2024-07-14 01:20:23.341967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.075 [2024-07-14 01:20:23.341996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.075 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.342203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.342251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.342429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.342475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.342680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.342705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.342882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.342909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.343107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.343151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.343394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.343436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.343616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.343661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 
00:34:34.076 [2024-07-14 01:20:23.343871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.343897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.344104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.344132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.344355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.344398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.344598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.344642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.344826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.344852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.345052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.345099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.345273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.345318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.345558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.345602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.345805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.345831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.346046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.346090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 
00:34:34.076 [2024-07-14 01:20:23.346264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.346306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.346535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.346578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.346783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.346808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.347015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.347060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.347286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.347330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.347499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.347541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.347751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.347777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.347969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.348015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.348185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.348231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.348435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.348478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 
00:34:34.076 [2024-07-14 01:20:23.348721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.348766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.348993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.349037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.349238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.349282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.349484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.349527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.349706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.349732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.349891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.349917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.350118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.350161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.350389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.350431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.350644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.350687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.350839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.350870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 
00:34:34.076 [2024-07-14 01:20:23.351069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.351112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.351313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.351356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.351584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.351626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.351839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.351876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.352066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.352092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.352289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.352331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.352532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.352575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.352757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.352782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.352955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.352982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.353189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.353218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 
00:34:34.076 [2024-07-14 01:20:23.353468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.353510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.353747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.353791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.354006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.354032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.354266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.354308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.354511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.354554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.354732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.354758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.076 [2024-07-14 01:20:23.354957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.076 [2024-07-14 01:20:23.355006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.076 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.355213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.355255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.355451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.355480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.355709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.355735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 
00:34:34.077 [2024-07-14 01:20:23.355884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.355910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.356098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.356125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.356307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.356350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.356554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.356597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.356798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.356823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.357038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.357081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.357260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.357306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.357503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.357546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.357730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.357756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.357952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.357999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 
00:34:34.077 [2024-07-14 01:20:23.358204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.358247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.358447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.358491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.358700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.358726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.358916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.358945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.359131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.359174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.359366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.359408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.359601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.359630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.359856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.359888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.360128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.360171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.360378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.360422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 
00:34:34.077 [2024-07-14 01:20:23.360610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.360653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.360802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.360828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.361037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.361081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.361284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.361331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.361563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.361606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.361814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.361839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.362025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.362051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.362255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.362299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.362501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.362544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.362732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.362758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 
00:34:34.077 [2024-07-14 01:20:23.362950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.362997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.363197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.363241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.363453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.363479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.363655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.363681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.363887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.363913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.364095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.364139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.364329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.364373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.364579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.364622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.364799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.364824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.365052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.365096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 
00:34:34.077 [2024-07-14 01:20:23.365302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.365345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.365511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.365554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.365753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.365779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.366003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.366047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.366250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.077 [2024-07-14 01:20:23.366293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.077 qpair failed and we were unable to recover it. 00:34:34.077 [2024-07-14 01:20:23.366495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.366538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.366713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.366739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.366933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.366976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.367144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.367186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.367416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.367460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 
00:34:34.078 [2024-07-14 01:20:23.367674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.367700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.367880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.367906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.368075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.368118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.368365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.368409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.368648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.368691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.368897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.368924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.369140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.369168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.369386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.369430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.369600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.369644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.369824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.369850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 
00:34:34.078 [2024-07-14 01:20:23.370035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.370062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.370265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.370308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.370487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.370533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.370732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.370761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.370942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.370968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.371198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.371241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.371448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.371491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.371698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.371742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.371939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.371983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.372158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.372202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 
00:34:34.078 [2024-07-14 01:20:23.372434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.372477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.372685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.372710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.372919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.372945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.373157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.373186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.373400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.373428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.373636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.373679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.373829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.373855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.374046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.374072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.374233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.374277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.374475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.374519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 
00:34:34.078 [2024-07-14 01:20:23.374727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.374753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.374946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.374990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.375164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.375209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.375405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.375448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.375661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.375686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.375863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.375895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.376077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.376102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.376312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.376356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.376550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.376579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.376775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.376800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 
00:34:34.078 [2024-07-14 01:20:23.376998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.377037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.377245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.377277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.377476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.377505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.377700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.377728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.377888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.377931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.378081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.378107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.378324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.378366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.378561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.378589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.378756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.378786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.378958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.378984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 
00:34:34.078 [2024-07-14 01:20:23.379169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.379198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.379386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.379414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.379631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.379659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.078 [2024-07-14 01:20:23.379815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.078 [2024-07-14 01:20:23.379843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.078 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.380059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.380084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.380301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.380329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.380502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.380530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.380689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.380716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.380922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.380962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.381165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.381209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 
00:34:34.079 [2024-07-14 01:20:23.381420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.381463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.381666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.381710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.381863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.381896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.382100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.382126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.382358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.382400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.382633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.382676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.382835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.382860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.383047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.383073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.383277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.383322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.383566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.383608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 
00:34:34.079 [2024-07-14 01:20:23.383822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.383847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.384034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.384060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.384259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.384303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.384536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.384578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.384760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.384786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.384990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.385017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.385218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.385265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.385466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.385509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.385740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.385783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.385984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.386028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 
00:34:34.079 [2024-07-14 01:20:23.386197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.386246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.386480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.386523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.386733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.386759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.386962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.387007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.387206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.387248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.387453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.387496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.387710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.387736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.387898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.387926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.388155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.388199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.388402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.388446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 
00:34:34.079 [2024-07-14 01:20:23.388620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.388662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.388836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.388861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.389042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.389068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.389296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.389340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.389544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.389588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.389731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.389757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.389986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.390030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.390261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.390304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.390502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.390545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.390752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.390777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 
00:34:34.079 [2024-07-14 01:20:23.390947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.390992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.391163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.391206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.391422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.391464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.391669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.391695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.391876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.391902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.392084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.392110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.392304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.392346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.392537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.392564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.392718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.392745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.392992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.393035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 
00:34:34.079 [2024-07-14 01:20:23.393213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.393243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.393410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.393439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.393689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.393717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.393930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.079 [2024-07-14 01:20:23.393957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.079 qpair failed and we were unable to recover it. 00:34:34.079 [2024-07-14 01:20:23.394188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.394216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.394387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.394427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.394623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.394651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.394849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.394883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.395086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.395114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.395333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.395361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 
00:34:34.080 [2024-07-14 01:20:23.395584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.395611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.395810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.395835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.396024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.396050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.396249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.396277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.396481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.396509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.396716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.396744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.396974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.397000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.397207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.397264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.397469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.397511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.397709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.397752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 
00:34:34.080 [2024-07-14 01:20:23.397911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.397937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.398133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.398176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.398379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.398424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.398626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.398669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.398846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.398888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.399122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.399165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.399392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.399423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.399599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.399627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.399827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.399852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.400043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.400068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 
00:34:34.080 [2024-07-14 01:20:23.400244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.400272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.400486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.400527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.400783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.400811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.401014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.401040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.401205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.401233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.401459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.401487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.401655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.401683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.401848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.401882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.402107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.402132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.402331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.402359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 
00:34:34.080 [2024-07-14 01:20:23.402529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.402557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.402767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.402795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.402991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.403018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.403213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.403241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.403428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.403456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.403624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.403652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.403877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.403919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.404078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.404110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.404291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.404316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.404549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.404577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 
00:34:34.080 [2024-07-14 01:20:23.404816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.404844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.405047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.405072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.405337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.405367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.405585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.405613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.405774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.080 [2024-07-14 01:20:23.405802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.080 qpair failed and we were unable to recover it. 00:34:34.080 [2024-07-14 01:20:23.405992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.406018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.406218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.406245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.406448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.406476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.406674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.406702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.406885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.406911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 
00:34:34.081 [2024-07-14 01:20:23.407114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.407157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.407382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.407411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.407607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.407635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.407833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.407861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.408066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.408091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.408245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.408270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.408426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.408463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.408699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.408727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.408894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.408919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.409089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.409113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 
00:34:34.081 [2024-07-14 01:20:23.409325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.409362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.409552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.409580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.409744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.409772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.409979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.410005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.410177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.410213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.410371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.410396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.410600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.410628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.410883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.410925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.411083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.411112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.411287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.411315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 
00:34:34.081 [2024-07-14 01:20:23.411503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.411531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.411716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.411744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.411956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.411982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.412128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.412169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.412330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.412358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.412554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.412584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.412772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.412800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.413000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.413025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.413240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.413268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.413440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.413467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 
00:34:34.081 [2024-07-14 01:20:23.413660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.413687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.413883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.413924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.414070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.414096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.414292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.414320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.414518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.414546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.414736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.414764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.414962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.414987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.415187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.415215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.415392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.415418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.415616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.415644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 
00:34:34.081 [2024-07-14 01:20:23.415814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.415842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.416043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.416069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.416219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.416243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.416466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.416493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.416680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.416705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.416882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.416917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.417119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.417147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.417349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.417374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.417577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.417602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 00:34:34.081 [2024-07-14 01:20:23.417838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.417874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.081 qpair failed and we were unable to recover it. 
00:34:34.081 [2024-07-14 01:20:23.418072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.081 [2024-07-14 01:20:23.418097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.418273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.418302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.418519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.418546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.418768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.418793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.419006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.419031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.419252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.419280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.419506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.419531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.419758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.419785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.420008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.420036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.420246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.420271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 
00:34:34.082 [2024-07-14 01:20:23.420438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.420467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.420700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.420728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.420959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.420985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.421153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.421182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.421404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.421432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.421633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.421658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.421810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.421834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.422014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.422040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.422213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.422237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.422382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.422407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 
00:34:34.082 [2024-07-14 01:20:23.422587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.422612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.422828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.422853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.423043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.423068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.423315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.423343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.423521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.423546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.423741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.423769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.423965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.423994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.424169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.424194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.424373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.424399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.424605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.424633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 
00:34:34.082 [2024-07-14 01:20:23.424822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.424850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.425045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.425070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.425243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.425267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.425439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.425463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.425622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.425650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.425839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.425874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.426101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.426126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.426300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.426328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.426549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.426577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.426776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.426801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 
00:34:34.082 [2024-07-14 01:20:23.427020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.427048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.427273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.427301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.427507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.427532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.427685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.427710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.427953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.427979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.428178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.428203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.428432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.428460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.428649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.428676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.428876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.428901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.429103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.429130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 
00:34:34.082 [2024-07-14 01:20:23.429339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.429364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.429565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.429590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.429772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.429797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.429974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.430000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.430200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.430225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.430395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.430423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.430621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.430647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.430852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.430886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.082 [2024-07-14 01:20:23.431064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.082 [2024-07-14 01:20:23.431088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.082 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.431309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.431336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 
00:34:34.083 [2024-07-14 01:20:23.431563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.431588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.431744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.431769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.431922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.431948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.432124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.432154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.432388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.432417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.432610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.432638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.432837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.432862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.433072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.433097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.433309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.433337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.433531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.433556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 
00:34:34.083 [2024-07-14 01:20:23.433701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.433726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.433925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.433954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.434122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.434147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.434355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.434383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.434607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.434632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.434841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.434881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.435082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.435107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.435352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.435377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.435549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.435575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 00:34:34.083 [2024-07-14 01:20:23.435772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.083 [2024-07-14 01:20:23.435800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.083 qpair failed and we were unable to recover it. 
00:34:34.083 [2024-07-14 01:20:23.436020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.083 [2024-07-14 01:20:23.436046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:34.083 qpair failed and we were unable to recover it.
00:34:34.083 [2024-07-14 01:20:23.440074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.083 [2024-07-14 01:20:23.440112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420
00:34:34.083 qpair failed and we were unable to recover it.
[The same three-line error pattern repeats continuously from 01:20:23.436 through 01:20:23.481: connect() to addr=10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error (first for tqpair=0x11c3600, then for tqpair=0x7f5f28000b90), and every attempt ends with "qpair failed and we were unable to recover it."]
00:34:34.362 [2024-07-14 01:20:23.481541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.481567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.481780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.481823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.482026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.482052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.482206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.482231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.482404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.482429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.482596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.482621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.482793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.482818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.482992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.483018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.483216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.483241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.483420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.483445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 
00:34:34.362 [2024-07-14 01:20:23.483623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.483648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.483875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.483903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.484096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.484121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.484325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.362 [2024-07-14 01:20:23.484350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.362 qpair failed and we were unable to recover it. 00:34:34.362 [2024-07-14 01:20:23.484535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.484562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.484759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.484784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.484963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.484990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.485143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.485169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.485322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.485347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.485525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.485551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 
00:34:34.363 [2024-07-14 01:20:23.485748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.485774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.485946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.485972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.486175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.486200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.486373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.486398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.486575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.486601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.486806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.486832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.486989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.487016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.487190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.487219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.487396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.487422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.487570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.487595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 
00:34:34.363 [2024-07-14 01:20:23.487796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.487823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.488072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.488101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.488300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.488328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.488482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.488510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.488751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.488779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.488995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.489021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.489202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.489228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.489436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.489462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.489640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.489665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.489841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.489873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 
00:34:34.363 [2024-07-14 01:20:23.490028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.490053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.490206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.490232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.490383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.490409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.490592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.490617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.490847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.490881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.491113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.491138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.491322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.491347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.491521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.491546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.491687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.491712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.491889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.491915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 
00:34:34.363 [2024-07-14 01:20:23.492126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.492151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.492295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.492320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.492493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.492519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.492699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.363 [2024-07-14 01:20:23.492724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.363 qpair failed and we were unable to recover it. 00:34:34.363 [2024-07-14 01:20:23.492920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.492946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.493128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.493153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.493373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.493401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.493590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.493619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.493837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.493871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.494097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.494126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 
00:34:34.364 [2024-07-14 01:20:23.494345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.494373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.494559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.494587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.494849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.494884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.495095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.495120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.495290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.495315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.495488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.495513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.495692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.495718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.495874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.495903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.496077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.496103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.496253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.496278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 
00:34:34.364 [2024-07-14 01:20:23.496424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.496450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.496656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.496681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.496854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.496892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.497068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.497093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.497267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.497294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.497502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.497527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.497710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.497736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.497912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.497938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.498144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.498169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.498350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.498375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 
00:34:34.364 [2024-07-14 01:20:23.498548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.498573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.498805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.498834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.499064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.499090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.499269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.499294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.499501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.499529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.499728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.499757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.499978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.500004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.364 [2024-07-14 01:20:23.500183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.364 [2024-07-14 01:20:23.500209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.364 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.500415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.500440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.500641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.500667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 
00:34:34.365 [2024-07-14 01:20:23.500822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.500847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.501009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.501034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.501236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.501262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.501436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.501462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.501648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.501673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.501910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.501937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.502141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.502167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.502369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.502395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.502574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.502599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.502796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.502822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 
00:34:34.365 [2024-07-14 01:20:23.503031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.503057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.503228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.503253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.503428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.503604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.503629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.503836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.503871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.504093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.504118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.504295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.504320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.504520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.504552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.504805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.504833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.505078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.505134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 
00:34:34.365 [2024-07-14 01:20:23.505351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.505396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.505601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.505645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.505853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.505885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.506090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.506115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.506319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.506363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.506681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.506732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.506908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.506934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.507137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.507180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.507432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.507475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.507712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.507754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 
00:34:34.365 [2024-07-14 01:20:23.507949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.507978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.508233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.508277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.508494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.508536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.508739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.508765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.508991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.509034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.509231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.509275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.365 [2024-07-14 01:20:23.509479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.365 [2024-07-14 01:20:23.509522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.365 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.509702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.509727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.509889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.509916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.510118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.510161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 
00:34:34.366 [2024-07-14 01:20:23.510539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.510599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.510775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.510801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.511005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.511049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.511366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.511431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.511636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.511680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.511833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.511858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.512009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.512034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.512204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.512246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.512453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.512498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.512708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.512734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 
00:34:34.366 [2024-07-14 01:20:23.512887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.512913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.513078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.513106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.513318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.513360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.513561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.513590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.513807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.513833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.514045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.514088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.514317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.514360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.514560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.514608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.514784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.514810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.515036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.515079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 
00:34:34.366 [2024-07-14 01:20:23.515273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.515301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.515538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.515580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.515793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.515818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.516025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.516050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.516268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.516311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.516508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.516551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.516724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.516750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.517016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.517059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.517288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.517330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.517526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.517554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 
00:34:34.366 [2024-07-14 01:20:23.517748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.517774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.518047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.518091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.518287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.518316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.518532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.518575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.518781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.518808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.519024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.519068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.519277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.519320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.366 qpair failed and we were unable to recover it. 00:34:34.366 [2024-07-14 01:20:23.519518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.366 [2024-07-14 01:20:23.519547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.519714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.519740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.519927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.519971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 
00:34:34.367 [2024-07-14 01:20:23.520181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.520207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.520410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.520454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.520623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.520650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.520835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.520861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.521136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.521179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.521386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.521415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.521617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.521643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.521825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.521851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.522012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.522054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.522245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.522273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 
00:34:34.367 [2024-07-14 01:20:23.522472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.522500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.522688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.522716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.522926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.522951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.523124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.523169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.523416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.523459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.523663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.523706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.523870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.523897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.524103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.524145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.524325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.524368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.524601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.524643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 
00:34:34.367 [2024-07-14 01:20:23.524846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.524886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.525061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.525103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.525315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.525342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.525570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.525613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.525757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.525784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.525984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.526027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.526173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.526200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.526407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.526434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.526596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.526622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.526836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.526861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 
00:34:34.367 [2024-07-14 01:20:23.527063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.527110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.527283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.527326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.527533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.527576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.527751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.527778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.527975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.528018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.528216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.528245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.528466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.528508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.528713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.367 [2024-07-14 01:20:23.528739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.367 qpair failed and we were unable to recover it. 00:34:34.367 [2024-07-14 01:20:23.528933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.528979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.529173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.529216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 
00:34:34.368 [2024-07-14 01:20:23.529415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.529443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.529633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.529659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.529845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.529876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.530050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.530093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.530326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.530372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.530566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.530609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.530782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.530808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.531010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.531053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.531226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.531269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.531443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.531487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 
00:34:34.368 [2024-07-14 01:20:23.531694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.531720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.531945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.531988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.532188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.532217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.532467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.532510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.532685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.532710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.532887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.532914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.533137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.533179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.533392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.533435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.533636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.533679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.533884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.533910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 
00:34:34.368 [2024-07-14 01:20:23.534139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.534182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.534333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.534359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.534566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.534608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.534759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.534784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.534989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.535032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.535237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.535280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.535480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.535524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.535699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.535725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.535948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.535992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.536219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.536263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 
00:34:34.368 [2024-07-14 01:20:23.536497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.536540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.536697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.536723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.536918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.368 [2024-07-14 01:20:23.536948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.368 qpair failed and we were unable to recover it. 00:34:34.368 [2024-07-14 01:20:23.537189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.537233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.537433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.537476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.537653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.537679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.537820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.537846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.538060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.538105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.538272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.538315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.538551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.538595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 
00:34:34.369 [2024-07-14 01:20:23.538779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.538805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.538974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.539017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.539247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.539290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.539502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.539545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.539747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.539777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.539972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.540017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.540225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.540268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.540500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.540543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.540696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.540721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.540943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.540986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 
00:34:34.369 [2024-07-14 01:20:23.541198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.541240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.541473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.541514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.541694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.541720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.541938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.541982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.542152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.542196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.542423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.542466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.542647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.542673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.542855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.542887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.543071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.543115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.543311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.543355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 
00:34:34.369 [2024-07-14 01:20:23.543547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.543590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.543798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.543824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.544030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.544072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.544279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.544323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.544516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.544559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.544764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.544789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.544974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.545000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.545231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.545275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.545441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.545484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.545715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.545758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 
00:34:34.369 [2024-07-14 01:20:23.545993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.546036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.546242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.546285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.546521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.546564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.369 qpair failed and we were unable to recover it. 00:34:34.369 [2024-07-14 01:20:23.546717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.369 [2024-07-14 01:20:23.546742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.546920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.546947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.547175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.547219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.547397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.547440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.547637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.547681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.547885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.547911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.548082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.548125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 
00:34:34.370 [2024-07-14 01:20:23.548320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.548364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.548562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.548605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.548790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.548816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.549020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.549065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.549293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.549340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.549578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.549620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.549821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.549847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.550028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.550054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.550263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.550308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.550514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.550557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 
00:34:34.370 [2024-07-14 01:20:23.550758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.550784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.550983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.551028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.551204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.551250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.551457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.551500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.551654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.551680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.551857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.551889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.552082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.552128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.552357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.552401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.552620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.552664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.552839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.552880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 
00:34:34.370 [2024-07-14 01:20:23.553089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.553114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.553296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.553340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.553545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.553589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.553769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.553796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.554011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.554055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.554282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.554326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.554539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.554582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.554736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.554762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.554956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.555002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.555237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.555279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 
00:34:34.370 [2024-07-14 01:20:23.555446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.555490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.555668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.555695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.555898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.555924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.556159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.556203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.370 qpair failed and we were unable to recover it. 00:34:34.370 [2024-07-14 01:20:23.556435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.370 [2024-07-14 01:20:23.556478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.556683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.556725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.556918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.556947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.557164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.557208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.557446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.557490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.557692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.557736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 
00:34:34.371 [2024-07-14 01:20:23.557933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.557977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.558213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.558256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.558432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.558477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.558676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.558702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.558908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.558940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.559149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.559191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.559391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.559419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.559610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.559654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.559834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.559860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.560048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.560074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 
00:34:34.371 [2024-07-14 01:20:23.560277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.560321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.560552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.560593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.560765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.560791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.561021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.561064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.561295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.561339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.561544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.561587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.561779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.561805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.561981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.562008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.562219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.562248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.562468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.562510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 
00:34:34.371 [2024-07-14 01:20:23.562745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.562789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.562983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.563028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.563239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.563282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.563501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.563544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.563744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.563770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.563942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.563985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.564186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.564229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.564439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.564482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.564687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.564730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.564927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.564971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 
00:34:34.371 [2024-07-14 01:20:23.565205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.565248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.565461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.565505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.565705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.565731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.565921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.565950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.566191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.371 [2024-07-14 01:20:23.566235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.371 qpair failed and we were unable to recover it. 00:34:34.371 [2024-07-14 01:20:23.566433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.566477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.566654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.566679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.566860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.566891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.567069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.567095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.567330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.567373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 
00:34:34.372 [2024-07-14 01:20:23.567615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.567657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.567861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.567892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.568108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.568133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.568308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.568351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.568549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.568597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.568799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.568825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.569037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.569080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.569284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.569327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.569530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.569575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.569755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.569781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 
00:34:34.372 [2024-07-14 01:20:23.569975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.570019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.570212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.570255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.570438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.570481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.570662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.570705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.570855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.570886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.571087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.571131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.571326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.571356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.571569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.571611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.571819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.571845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.572061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.572105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 
00:34:34.372 [2024-07-14 01:20:23.572291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.572334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.572527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.572554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.572730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.572756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.572982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.573024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.573227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.573270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.573474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.573516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.573662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.573687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.573864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.573897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.574098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.574140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 00:34:34.372 [2024-07-14 01:20:23.574339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.574383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.372 qpair failed and we were unable to recover it. 
00:34:34.372 [2024-07-14 01:20:23.574584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.372 [2024-07-14 01:20:23.574627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.574808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.574834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.575036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.575081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.575285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.575329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.575540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.575567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.575721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.575747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.575942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.575985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.576193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.576237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.576465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.576509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.576666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.576693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 
00:34:34.373 [2024-07-14 01:20:23.576878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.576905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.577106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.577150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.577350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.577393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.577593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.577637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.577814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.577844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.577997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.578024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.578258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.578301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.578499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.578527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.578718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.578744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.578933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.578977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 
00:34:34.373 [2024-07-14 01:20:23.579182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.579224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.579402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.579446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.579622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.579648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.579849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.579880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.580116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.580160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.580337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.580380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.580568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.580595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.580776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.580802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.581038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.581083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.581311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.581354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 
00:34:34.373 [2024-07-14 01:20:23.581551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.581580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.581778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.581804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.582033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.582076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.582282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.582325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.582531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.582574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.582725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.582751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.582980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.583024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.583225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.583278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.583503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.583548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.583757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.583783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 
00:34:34.373 [2024-07-14 01:20:23.583987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.584030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.373 qpair failed and we were unable to recover it. 00:34:34.373 [2024-07-14 01:20:23.584262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.373 [2024-07-14 01:20:23.584304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.584536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.584579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.584725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.584751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.584977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.585021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.585199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.585242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.585450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.585477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.585635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.585661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.585840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.585872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.586039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.586082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 
00:34:34.374 [2024-07-14 01:20:23.586248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.586291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.586523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.586567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.586746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.586772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.586999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.587042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.587216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.587264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.587461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.587504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.587675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.587701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.587882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.587909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.588113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.588142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.588388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.588431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 
00:34:34.374 [2024-07-14 01:20:23.588581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.588607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.588783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.588810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.588983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.589026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.589238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.589283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.589507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.589551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.589752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.589778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.589980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.590024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.590223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.590267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.590497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.590539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.590719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.590745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 
00:34:34.374 [2024-07-14 01:20:23.590939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.590983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.591162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.591206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.591444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.591488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.591690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.591716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.591916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.591942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.592170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.592213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.592439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.592483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.592692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.592735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.592931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.592960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.593182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.593226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 
00:34:34.374 [2024-07-14 01:20:23.593393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.593436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.374 [2024-07-14 01:20:23.593643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.374 [2024-07-14 01:20:23.593686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.374 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.593858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.593890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.594063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.594089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.594318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.594360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.594537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.594580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.594753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.594779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.594979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.595005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.595232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.595275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.595506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.595548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 
00:34:34.375 [2024-07-14 01:20:23.595703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.595729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.595923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.595967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.596170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.596198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.596432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.596458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.596662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.596692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.596872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.596899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.597097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.597141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.597337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.597366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.597582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.597625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.597831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.597857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 
00:34:34.375 [2024-07-14 01:20:23.598059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.598104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.598335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.598378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.598578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.598621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.598770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.598796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.598990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.599033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.599204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.599247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.599475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.599517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.599727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.599753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.599966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.600009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.600243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.600286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 
00:34:34.375 [2024-07-14 01:20:23.600430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.600457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.600665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.600690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.600895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.600921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.601124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.601167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.601377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.601420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.601587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.601630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.601830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.601856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.602094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.602137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.602370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.602414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.602615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.602658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 
00:34:34.375 [2024-07-14 01:20:23.602873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.375 [2024-07-14 01:20:23.602898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.375 qpair failed and we were unable to recover it. 00:34:34.375 [2024-07-14 01:20:23.603077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.603103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.603298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.603342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.603582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.603625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.603799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.603825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.604004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.604031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.604264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.604306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.604504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.604548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.604742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.604768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.604948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.604975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 
00:34:34.376 [2024-07-14 01:20:23.605201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.605244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.605479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.605523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.605696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.605744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.605936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.605981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.606213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.606262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.606469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.606512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.606693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.606719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.606938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.606981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.607180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.607223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.607426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.607469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 
00:34:34.376 [2024-07-14 01:20:23.607671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.607714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.607948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.607991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.608191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.608219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.608461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.608504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.608706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.608732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.608891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.608917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.609107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.609151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.609387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.609430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.609616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.609642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.609792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.609819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 
00:34:34.376 [2024-07-14 01:20:23.610030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.610060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.610303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.610347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.610557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.610601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.610804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.610830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.611067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.611109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.611337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.611380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.611581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.611624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.611825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.611851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.612088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.612131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.612331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.612374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 
00:34:34.376 [2024-07-14 01:20:23.612602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.612645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.612821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.376 [2024-07-14 01:20:23.612847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.376 qpair failed and we were unable to recover it. 00:34:34.376 [2024-07-14 01:20:23.613093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.613137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.613346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.613390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.613620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.613663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.613839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.613871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.614046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.614073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.614306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.614348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.614536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.614578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.614783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.614809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 
00:34:34.377 [2024-07-14 01:20:23.615046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.615090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.615265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.615308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.615481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.615525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.615730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.615756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.615922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.615956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.616147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.616191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.616368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.616412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.616611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.616654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.616854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.616896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.617123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.617166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 
00:34:34.377 [2024-07-14 01:20:23.617382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.617424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.617602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.617647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.617849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.617883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.618086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.618130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.618330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.618374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.618522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.618548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.618731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.618757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.618961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.619004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.619235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.619280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.619486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.619528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 
00:34:34.377 [2024-07-14 01:20:23.619678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.619703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.619878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.619922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.620159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.620203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.620407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.620452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.620665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.620691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.620842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.620879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.621062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.377 [2024-07-14 01:20:23.621110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.377 qpair failed and we were unable to recover it. 00:34:34.377 [2024-07-14 01:20:23.621289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.621332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.621562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.621605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.621805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.621831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 
00:34:34.378 [2024-07-14 01:20:23.622016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.622042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.622202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.622229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.622431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.622474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.622679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.622705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.622887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.622915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.623088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.623132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.623304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.623348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.623557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.623601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.623785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.623811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.624027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.624072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 
00:34:34.378 [2024-07-14 01:20:23.624258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.624302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.624533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.624577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.624754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.624779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.624944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.624972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.625149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.625197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.625369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.625412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.625623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.625649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.625827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.625853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.626072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.626116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.626319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.626363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 
00:34:34.378 [2024-07-14 01:20:23.626559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.626602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.626801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.626826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.627005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.627050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.627222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.627265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.627489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.627517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.627708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.627734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.627938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.627982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.628153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.628196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.628431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.628475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.628654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.628681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 
00:34:34.378 [2024-07-14 01:20:23.628877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.628904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.629092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.629141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.629335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.629364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.629589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.629616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.629794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.629820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.630023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.630068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.630240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.630284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.378 qpair failed and we were unable to recover it. 00:34:34.378 [2024-07-14 01:20:23.630476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.378 [2024-07-14 01:20:23.630503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.630682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.630708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.630856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.630888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 
00:34:34.379 [2024-07-14 01:20:23.631077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.631120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.631358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.631404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.631589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.631616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.631771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.631797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.632011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.632055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.632226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.632269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.632467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.632495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.632665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.632691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.632871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.632898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 00:34:34.379 [2024-07-14 01:20:23.633066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.379 [2024-07-14 01:20:23.633109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.379 qpair failed and we were unable to recover it. 
00:34:34.384 [2024-07-14 01:20:23.680170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.680214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.680390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.680432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.680654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.680681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.680884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.680910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.681106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.681149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.681328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.681375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.681562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.681604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.681785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.681810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.682041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.682084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.682260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.682308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 
00:34:34.384 [2024-07-14 01:20:23.682501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.682530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.682696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.682722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.682939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.682982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.683237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.683281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.683470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.683513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.683700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.683726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.683922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.683952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.684166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.684208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.684418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.684462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.684665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.684690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 
00:34:34.384 [2024-07-14 01:20:23.684875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.684902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.685134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.685178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.685383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.685426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.685589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.685633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.685777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.685803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.685951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.384 [2024-07-14 01:20:23.685977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.384 qpair failed and we were unable to recover it. 00:34:34.384 [2024-07-14 01:20:23.686180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.686223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.686408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.686452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.686658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.686701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.686885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.686911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 
00:34:34.385 [2024-07-14 01:20:23.687112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.687155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.687386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.687434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.687647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.687691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.687841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.687882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.688083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.688112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.688358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.688402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.688573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.688616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.688764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.688790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.688987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.689031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.689214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.689257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 
00:34:34.385 [2024-07-14 01:20:23.689466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.689510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.689670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.689696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.689849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.689882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.690060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.690102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.690341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.690383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.690600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.690626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.690785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.690812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.691043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.691086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.691292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.691336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.691536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.691579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 
00:34:34.385 [2024-07-14 01:20:23.691764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.691791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.692012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.692057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.692231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.692275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.692458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.692501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.692673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.692707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.692892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.692928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.693122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.693165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.693361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.693390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.693612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.693654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.385 [2024-07-14 01:20:23.693832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.693858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 
00:34:34.385 [2024-07-14 01:20:23.694063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.385 [2024-07-14 01:20:23.694107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.385 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.694319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.694363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.694551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.694594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.694797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.694822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.695062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.695107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.695313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.695356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.695556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.695584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.695751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.695777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.696013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.696057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.696300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.696343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 
00:34:34.386 [2024-07-14 01:20:23.696572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.696614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.696763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.696794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.696970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.697013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.697212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.697256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.697441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.697485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.697687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.697713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.697911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.697954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.698189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.698231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.698463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.698506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.698682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.698708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 
00:34:34.386 [2024-07-14 01:20:23.698891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.698918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.699128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.699172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.699398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.699442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.699683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.699725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.699889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.699916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.700124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.700168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.700349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.700393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.700632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.700676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.700857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.700889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.701092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.701136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 
00:34:34.386 [2024-07-14 01:20:23.701337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.701380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.701591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.701635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.701849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.701880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.702070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.702096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.702286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.702330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.702533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.702578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.702771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.702797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.702994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.703042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.703251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.703294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.703502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.703546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 
00:34:34.386 [2024-07-14 01:20:23.703766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.386 [2024-07-14 01:20:23.703792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.386 qpair failed and we were unable to recover it. 00:34:34.386 [2024-07-14 01:20:23.703993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.704036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.704237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.704281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.704512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.704557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.704750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.704776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.704972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.705016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.705247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.705289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.705461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.705505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.705691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.705718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.705944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.705989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 
00:34:34.387 [2024-07-14 01:20:23.706218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.706261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.706471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.706518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.706721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.706746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.706890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.706918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.707122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.707171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.707374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.707416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.707613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.707655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.707830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.707855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.708073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.708116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.708318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.708361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 
00:34:34.387 [2024-07-14 01:20:23.708594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.708638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.708855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.708888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.709124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.709167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.709404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.709448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.709681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.709723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.709905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.709932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.710104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.710147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.710359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.710402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.710611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.710654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.710833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.710858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 
00:34:34.387 [2024-07-14 01:20:23.711020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.711045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.711242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.711285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.711523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.711566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.711772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.711797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.711975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.712001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.712167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.712210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.712438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.712481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.712662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.712715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.712897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.712934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.713128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.713172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 
00:34:34.387 [2024-07-14 01:20:23.713400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.387 [2024-07-14 01:20:23.713443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.387 qpair failed and we were unable to recover it. 00:34:34.387 [2024-07-14 01:20:23.713637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.713685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.713889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.713915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.714140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.714184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.714354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.714397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.714587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.714629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.714780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.714805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.715004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.715048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.715246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.715289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.715517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.715560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 
00:34:34.388 [2024-07-14 01:20:23.715755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.715781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.715990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.716038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.716267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.716311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.716549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.716593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.716793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.716820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.716975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.717005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.717206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.717250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.717473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.717517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.717723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.717749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.717940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.717987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 
00:34:34.388 [2024-07-14 01:20:23.718192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.718236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.718464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.718507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.718693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.718719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.718938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.718983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.719184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.719210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.719387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.719431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.719636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.719662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.719836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.719861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.720116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.720165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.720372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.720415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 
00:34:34.388 [2024-07-14 01:20:23.720650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.720693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.720845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.720886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.721050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.721076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.721257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.721299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.721533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.721577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.721758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.721783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.721981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.722029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.722196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.722240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.722475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.722519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.722700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.722727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 
00:34:34.388 [2024-07-14 01:20:23.722926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.722956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.388 [2024-07-14 01:20:23.723172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.388 [2024-07-14 01:20:23.723215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.388 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.723381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.723424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.723622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.723665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.723882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.723910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.724108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.724137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.724325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.724369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.724574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.724616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.724797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.724823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.724995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.725039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 
00:34:34.389 [2024-07-14 01:20:23.725270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.725314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.725514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.725562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.725767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.725793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.725970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.726014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.726180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.726224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.726418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.726460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.726641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.726667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.726872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.726898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.727074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.727116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.727324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.727351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 
00:34:34.389 [2024-07-14 01:20:23.727586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.727629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.727805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.727830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.727991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.728018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.728214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.728258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.728462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.728506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.728682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.728709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.728936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.728979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.729184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.729227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.729463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.729506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.729691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.729716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 
00:34:34.389 [2024-07-14 01:20:23.729898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.729925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.730127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.730171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.730397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.730440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.730663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.730705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.730917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.730943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.731131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.731158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.731393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.731436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.731605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.731655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.731876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.731903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.732109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.732142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 
00:34:34.389 [2024-07-14 01:20:23.732314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.732357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.732554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.389 [2024-07-14 01:20:23.732597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.389 qpair failed and we were unable to recover it. 00:34:34.389 [2024-07-14 01:20:23.732744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.732771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.732999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.733043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.733266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.733310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.733487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.733535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.733747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.733773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.733951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.733979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.734146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.734190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.734427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.734470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 
00:34:34.390 [2024-07-14 01:20:23.734623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.734651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.734833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.734863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.735176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.735218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.735419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.735463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.735665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.735708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.735926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.735970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.736175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.736218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.736420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.736463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.736638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.736664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.736880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.736906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 
00:34:34.390 [2024-07-14 01:20:23.737107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.737159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.737386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.737429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.737629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.737658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.737882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.737909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.738110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.738153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.738402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.738445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.738613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.738656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.738850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.738882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.739084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.739113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.739335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.739378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 
00:34:34.390 [2024-07-14 01:20:23.739576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.739618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.739822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.739848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.740036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.740064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.740263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.740305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.740534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.740578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.740739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.740765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.740979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.741005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.741212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.390 [2024-07-14 01:20:23.741254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.390 qpair failed and we were unable to recover it. 00:34:34.390 [2024-07-14 01:20:23.741488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.741532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.741705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.741731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 
00:34:34.391 [2024-07-14 01:20:23.741962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.742005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.742249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.742292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.742495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.742538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.742765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.742791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.743017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.743060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.743276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.743318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.743498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.743547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.743732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.743758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.743984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.744028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.744213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.744261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 
00:34:34.391 [2024-07-14 01:20:23.744447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.744490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.744636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.744667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.744879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.744906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.745135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.745180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.745326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.745352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.745552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.745595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.745759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.745787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.745962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.746007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.746203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.746250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.746479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.746522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 
00:34:34.391 [2024-07-14 01:20:23.746728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.746754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.746953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.746997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.747225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.747269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.747511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.747554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.747736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.747761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.748005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.748048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.748219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.748266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.748493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.748535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.748716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.748742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.748968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.749013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 
00:34:34.391 [2024-07-14 01:20:23.749190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.749232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.749439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.749481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.749666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.749692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.749890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.749916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.750141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.750185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.750415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.750458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.750665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.750708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.750880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.750906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.751110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.391 [2024-07-14 01:20:23.751153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.391 qpair failed and we were unable to recover it. 00:34:34.391 [2024-07-14 01:20:23.751382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.751425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 
00:34:34.392 [2024-07-14 01:20:23.751653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.751696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.751889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.751916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.752118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.752143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.752345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.752389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.752604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.752648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.752828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.752854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.753069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.753095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.753323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.753367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.753544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.753589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.753791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.753817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 
00:34:34.392 [2024-07-14 01:20:23.753993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.754019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.754249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.754291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.754527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.754571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.754758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.754784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.754952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.754995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.755234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.755277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.755428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.755455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.755658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.755687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.392 [2024-07-14 01:20:23.755907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.392 [2024-07-14 01:20:23.755933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.392 qpair failed and we were unable to recover it. 00:34:34.670 [2024-07-14 01:20:23.756114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.670 [2024-07-14 01:20:23.756158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.670 qpair failed and we were unable to recover it. 
00:34:34.671 [2024-07-14 01:20:23.756325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.756368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.756602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.756645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.756851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.756893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.757075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.757101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.757301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.757344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.757581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.757625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.757829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.757855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.758068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.758093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.758277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.758320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.758497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.758540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 
00:34:34.671 [2024-07-14 01:20:23.758691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.758717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.758908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.758953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.759158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.759203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.759404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.759448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.759650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.759693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.759877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.759904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.760051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.760077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.760294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.760339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.760529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.760575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.760746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.760772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 
00:34:34.671 [2024-07-14 01:20:23.760947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.760973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.761186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.761241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.761472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.761516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.761731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.761775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.761970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.762013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.762210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.762254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.762482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.762525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.762710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.762736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.762933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.762976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.763169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.763211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 
00:34:34.671 [2024-07-14 01:20:23.763451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.763495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.763701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.763727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.763886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.763913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.764101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.764145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.764351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.764396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.764593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.764622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.764808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.764835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.765068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.765113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.765309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.765353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 00:34:34.671 [2024-07-14 01:20:23.765562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.671 [2024-07-14 01:20:23.765605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.671 qpair failed and we were unable to recover it. 
00:34:34.672 [2024-07-14 01:20:23.765787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.765812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.765984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.766029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.766264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.766306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.766514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.766556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.766766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.766792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.766999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.767025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.767239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.767267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.767472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.767498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.767705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.767731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.767950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.767994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 
00:34:34.672 [2024-07-14 01:20:23.768165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.768207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.768435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.768479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.768701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.768726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.768898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.768927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.769149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.769192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.769405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.769432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.769631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.769657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.769824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.769850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.770051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.770082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.770264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.770309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 
00:34:34.672 [2024-07-14 01:20:23.770537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.770582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.770724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.770750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.770926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.770956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.771179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.771222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.771428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.771471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.771646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.771671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.771856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.771892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.772107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.772134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.772344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.772386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.772593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.772636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 
00:34:34.672 [2024-07-14 01:20:23.772820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.772845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.773031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.773074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.773317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.773360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.773569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.773612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.773790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.773816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.774029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.774072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.774274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.774318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.774518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.774562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.774742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.774767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 00:34:34.672 [2024-07-14 01:20:23.774962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.672 [2024-07-14 01:20:23.775007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.672 qpair failed and we were unable to recover it. 
00:34:34.672 [2024-07-14 01:20:23.775170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.775213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.775418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.775460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.775661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.775705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.775888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.775924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.776145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.776188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.776404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.776447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.776652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.776694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.776848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.776890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.777076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.777103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.777283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.777327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 
00:34:34.673 [2024-07-14 01:20:23.777533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.777576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.777722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.777749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.777980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.778024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.778197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.778240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.778430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.778473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.778661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.778686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.778863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.778894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.779085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.779110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.779305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.779353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.779527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.779570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 
00:34:34.673 [2024-07-14 01:20:23.779721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.779748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.779954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.779998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.780176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.780219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.780426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.780468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.780621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.780647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.780799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.780825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.781022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.781065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.781230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.781273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.781472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.781514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.781716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.781741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 
00:34:34.673 [2024-07-14 01:20:23.781911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.781947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.782129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.782174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.782383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.782426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.782646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.782689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.782844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.782876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.783077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.783120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.783319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.783362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.783559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.783588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.783783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.783809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.673 [2024-07-14 01:20:23.784012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.784057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 
00:34:34.673 [2024-07-14 01:20:23.784276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.673 [2024-07-14 01:20:23.784321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.673 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.784518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.784547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.784747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.784773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.784973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.785021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.785232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.785276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.785435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.785461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.785663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.785689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.785847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.785879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.786084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.786128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.786352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.786396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 
00:34:34.674 [2024-07-14 01:20:23.786596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.786639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.786812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.786838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.787078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.787122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.787363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.787407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.787647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.787690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.787841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.787875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.788069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.788112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.788313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.788342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.788562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.788609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.788792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.788818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 
00:34:34.674 [2024-07-14 01:20:23.788975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.789002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.789194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.789237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.789470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.789514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.789680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.789722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.789957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.790001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.790198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.790242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.790414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.790456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.790664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.790690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.790841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.790873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.791061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.791103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 
00:34:34.674 [2024-07-14 01:20:23.791299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.791343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.791512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.791555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.791780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.791806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.792011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.792055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.792263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.792306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.792517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.792560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.792788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.792814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.792996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.793022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.793232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.793258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 00:34:34.674 [2024-07-14 01:20:23.793452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.674 [2024-07-14 01:20:23.793495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.674 qpair failed and we were unable to recover it. 
00:34:34.680 [2024-07-14 01:20:23.835674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.835700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.835919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.835948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.836194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.836236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.836444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.836486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.836684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.836712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.836919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.836949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.837197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.837240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.837445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.837488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.837688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.837717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.837933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.837958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 
00:34:34.680 [2024-07-14 01:20:23.838104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.838129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.838329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.838355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.838589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.838631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.838807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.838833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.839012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.839038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.839233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.839276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.839464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.839507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.839713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.839739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.839891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.839921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.840121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.840147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 
00:34:34.680 [2024-07-14 01:20:23.840349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.840391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.840615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.840644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.840831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.840856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.841040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.841066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.841243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.841286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.841509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.841551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.841728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.841754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.841957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.841983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.842160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.842185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.842353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.842378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 
00:34:34.680 [2024-07-14 01:20:23.842571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.680 [2024-07-14 01:20:23.842617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.680 qpair failed and we were unable to recover it. 00:34:34.680 [2024-07-14 01:20:23.842822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.842847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.843023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.843049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.843253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.843296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.843522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.843565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.843765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.843791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.843971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.843997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.844170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.844195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.844421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.844463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.844657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.844686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 
00:34:34.681 [2024-07-14 01:20:23.844885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.844912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.845086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.845112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.845310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.845353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.845544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.845572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.845769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.845795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.845989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.846015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.846198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.846224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.846402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.846428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.846636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.846662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.846845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.846877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 
00:34:34.681 [2024-07-14 01:20:23.847081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.847107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.847335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.847379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.847578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.847606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.847800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.847825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.848008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.848035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.848235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.848278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.848456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.848502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.848684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.848709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.848884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.848914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.849072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.849098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 
00:34:34.681 [2024-07-14 01:20:23.849293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.849335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.849550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.849593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.849778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.849803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.849956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.849982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.850170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.850196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.850367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.850393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.850576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.850601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.850801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.850826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.851039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.851066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.851213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.851239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 
00:34:34.681 [2024-07-14 01:20:23.851386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.851412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.681 qpair failed and we were unable to recover it. 00:34:34.681 [2024-07-14 01:20:23.851612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.681 [2024-07-14 01:20:23.851655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.851871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.851897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.852050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.852076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.852278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.852307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.852548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.852592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.852743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.852768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.852913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.852939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.853122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.853147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.853323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.853349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 
00:34:34.682 [2024-07-14 01:20:23.853547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.853591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.853761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.853787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.853938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.853964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.854118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.854145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.854325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.854352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.854561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.854604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.854781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.854807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.854998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.855024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.855182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.855208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.855388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.855414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 
00:34:34.682 [2024-07-14 01:20:23.855596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.855622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.855821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.855847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.856038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.856064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.856263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.856306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.856508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.856552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.856732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.856758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.856912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.856940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.857117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.857142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.857341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.857391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.857589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.857632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 
00:34:34.682 [2024-07-14 01:20:23.857816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.857842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.858004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.858031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.858182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.858208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.858433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.858476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.858662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.858688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.858875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.858902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.859049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.859075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.859276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.859320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.859526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.859570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.859761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.859787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 
00:34:34.682 [2024-07-14 01:20:23.859953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.859980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.860175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.682 [2024-07-14 01:20:23.860204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.682 qpair failed and we were unable to recover it. 00:34:34.682 [2024-07-14 01:20:23.860400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.860444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.860646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.860672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.860851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.860883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.861032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.861058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.861234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.861276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.861516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.861559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.861738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.861764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.861910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.861936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 
00:34:34.683 [2024-07-14 01:20:23.862081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.862107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.862301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.862344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.862566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.862609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.862783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.862809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.862980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.863006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.863187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.863230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.863429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.863458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.863679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.863723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.863903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.863929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.864140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.864166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 
00:34:34.683 [2024-07-14 01:20:23.864338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.864382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.864625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.864669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.864822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.864848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.865001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.865027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.865227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.865271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.865504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.865548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.865712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.865738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.865923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.865949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.866103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.866129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.866339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.866367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 
00:34:34.683 [2024-07-14 01:20:23.866529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.866555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.866701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.866727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.866934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.866960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.867106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.867132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.867286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.867313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.867468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.867494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.867670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.683 [2024-07-14 01:20:23.867696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.683 qpair failed and we were unable to recover it. 00:34:34.683 [2024-07-14 01:20:23.867899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.867926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.868108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.868133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.868308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.868334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 
00:34:34.684 [2024-07-14 01:20:23.868478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.868504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.868683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.868708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.868888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.868914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.869062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.869087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.869237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.869262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.869462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.869505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.869682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.869707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.869912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.869939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.870088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.870114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.870318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.870345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 
00:34:34.684 [2024-07-14 01:20:23.870592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.870634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.870784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.870810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.870964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.870991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.871143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.871169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.871319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.871346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.871557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.871587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.871766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.871792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.871960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.872002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.872234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.872260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.872439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.872467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 
00:34:34.684 [2024-07-14 01:20:23.872644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.872670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.872848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.872880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.873104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.873145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.873333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.873375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.873559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.873585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.873789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.873815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.873968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.873995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.874148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.874173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.874378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.874404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.874582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.874608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 
00:34:34.684 [2024-07-14 01:20:23.874791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.874817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.874997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.875023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.875203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.875229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.875443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.875468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.875628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.875655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.875857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.875889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.876069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.876095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.684 qpair failed and we were unable to recover it. 00:34:34.684 [2024-07-14 01:20:23.876266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.684 [2024-07-14 01:20:23.876292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.876496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.876522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.876697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.876723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 
00:34:34.685 [2024-07-14 01:20:23.876898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.876924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.877080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.877106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.877292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.877318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.877517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.877544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.877694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.877721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.877926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.877953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.878161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.878186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.878364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.878391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.878564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.878590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.878777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.878802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 
00:34:34.685 [2024-07-14 01:20:23.878987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.879014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.879188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.879213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.879413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.879439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.879617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.879644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.879852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.879883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.880086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.880118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.880329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.880354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.880530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.880555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.880702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.880728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.880936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.880962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 
00:34:34.685 [2024-07-14 01:20:23.881110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.881136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.881333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.881359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.881535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.881560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.881734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.881760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.881927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.881953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.882126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.882152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.882306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.882332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.882505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.882531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.882724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.882764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.882938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.882967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 
00:34:34.685 [2024-07-14 01:20:23.883150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.883176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.883378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.883403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.883577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.883602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.883748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.883774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.883933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.883959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.884103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.884128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.884327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.884352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.685 [2024-07-14 01:20:23.884497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.685 [2024-07-14 01:20:23.884522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.685 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.884670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.884695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.884878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.884904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 
00:34:34.686 [2024-07-14 01:20:23.885074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.885099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.885270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.885295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.885473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.885498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.885671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.885695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.885840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.885872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.886022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.886047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.886249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.886273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.886450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.886475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.886624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.886649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.886799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.886824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 
00:34:34.686 [2024-07-14 01:20:23.887010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.887036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.887238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.887263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.887433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.887458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.887632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.887657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.887811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.887836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.887985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.888011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.888190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.888215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.888392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.888417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.888569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.888594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.888739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.888764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 
00:34:34.686 [2024-07-14 01:20:23.888956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.888982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.889134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.889160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.889310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.889334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.889504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.889529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.889731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.889756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.889932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.889958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.890160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.890185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.890361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.890386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.890541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.890566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.890746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.890775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 
00:34:34.686 [2024-07-14 01:20:23.890984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.891010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.891158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.891182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.891330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.891355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.891556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.891581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.686 [2024-07-14 01:20:23.891763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.686 [2024-07-14 01:20:23.891788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.686 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.891967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.891992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.892173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.892198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.892409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.892451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.892645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.892673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.892872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.892915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 
00:34:34.687 [2024-07-14 01:20:23.893096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.893120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.893319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.893347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.893548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.893573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.893785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.893813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.893995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.894021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.894168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.894193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.894389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.894417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.894637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.894664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.894860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.894892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.895067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.895092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 
00:34:34.687 [2024-07-14 01:20:23.895301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.895329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.895490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.895518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.895736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.895764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.895970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.895996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.896165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.896193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.896362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.896389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.896586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.896618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.896810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.896839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.897045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.897070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.897250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.897275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 
00:34:34.687 [2024-07-14 01:20:23.897498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.897526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.897701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.897729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.897903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.897945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.898123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.898163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.898359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.898384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.898616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.898644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.898876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.898919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.899073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.899098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.899280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.899305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.899475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.899503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 
00:34:34.687 [2024-07-14 01:20:23.899693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.899721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.899882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.899924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.900104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.900129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.900275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.900300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.900443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.900468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.687 qpair failed and we were unable to recover it. 00:34:34.687 [2024-07-14 01:20:23.900636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.687 [2024-07-14 01:20:23.900661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.900813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.900837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.901021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.901047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.901251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.901275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.901449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.901474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 
00:34:34.688 [2024-07-14 01:20:23.901620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.901645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.901818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.901843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.902024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.902065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.902273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.902306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.902495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.902521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.902728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.902753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.902909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.902936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.903122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.903148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.903334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.903360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.903565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.903591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 
00:34:34.688 [2024-07-14 01:20:23.903753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.903778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.903954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.903980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.904135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.904162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.904360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.904386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.904534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.904559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.904761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.904787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.904996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.905022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.905177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.905204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.905364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.905390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.905570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.905595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 
00:34:34.688 [2024-07-14 01:20:23.905774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.905799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.905980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.906007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.906153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.906178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.906352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.906378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.906555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.906581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.906760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.906785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.906961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.906987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.688 qpair failed and we were unable to recover it. 00:34:34.688 [2024-07-14 01:20:23.907169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.688 [2024-07-14 01:20:23.907194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.907370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.907396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.907551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.907576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 
00:34:34.689 [2024-07-14 01:20:23.907781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.907806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.908011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.908037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.908224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.908249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.908424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.908449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.908624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.908650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.908820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.908845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.909037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.909063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.909244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.909269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.909441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.909467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.909640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.909666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 
00:34:34.689 [2024-07-14 01:20:23.909848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.909882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.910088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.910113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.910257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.910282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.910455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.910480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.910639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.910665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.910841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.910875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.911024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.911049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.911229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.911254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.911411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.911437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.911618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.911643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 
00:34:34.689 [2024-07-14 01:20:23.911814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.911840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.912027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.912053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.912229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.912254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.912434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.912459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.912637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.912663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.912809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.912834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.912989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.913014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.913202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.913227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.913404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.913430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.913581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.913607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 
00:34:34.689 [2024-07-14 01:20:23.913763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.913788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.913964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.913990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.914134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.914161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.914364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.914389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.914592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.914617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.914775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.914801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.915003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.915029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.689 qpair failed and we were unable to recover it. 00:34:34.689 [2024-07-14 01:20:23.915240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.689 [2024-07-14 01:20:23.915266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.915473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.915498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.915706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.915731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 
00:34:34.690 [2024-07-14 01:20:23.915890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.915921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.916100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.916125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.916329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.916354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.916504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.916531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.916737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.916762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.916941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.916967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.917144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.917170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.917345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.917370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.917546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.917571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.917739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.917764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 
00:34:34.690 [2024-07-14 01:20:23.917941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.917967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.918141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.918167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.918338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.918363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.918537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.918562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.918744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.918769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.918954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.918980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.919134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.919159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.919364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.919389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.919566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.919591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.919734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.919759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 
00:34:34.690 [2024-07-14 01:20:23.919963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.919989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.920141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.920166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.920344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.920369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.920545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.920571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.920749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.920774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.920954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.920981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.921171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.921196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.921376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.921401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.921582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.921607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.921781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.921806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 
00:34:34.690 [2024-07-14 01:20:23.921983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.922010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.922193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.922219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.922398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.922423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.922624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.922649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.922818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.922844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.922998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.923024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.923203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.923229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.690 [2024-07-14 01:20:23.923405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.690 [2024-07-14 01:20:23.923430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.690 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.923581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.923607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.923791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.923817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 
00:34:34.691 [2024-07-14 01:20:23.923993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.924023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.924202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.924228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.924436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.924462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.924608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.924633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.924778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.924805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.924980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.925006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.925210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.925235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.925385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.925410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.925549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.925575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.925781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.925807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 
00:34:34.691 [2024-07-14 01:20:23.925998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.926023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.926229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.926255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.926398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.926424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.926597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.926623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.926817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.926842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.927052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.927078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.927246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.927271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.927451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.927476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.927690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.927715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.927860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.927890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 
00:34:34.691 [2024-07-14 01:20:23.928074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.928099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.928283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.928308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.928516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.928541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.928716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.928741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.928919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.928945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.929121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.929146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.929323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.929348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.929525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.929551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.929727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.929752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.929956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.929982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 
00:34:34.691 [2024-07-14 01:20:23.930129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.930154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.930356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.930381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.930562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.930587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.930741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.930768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.930981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.691 [2024-07-14 01:20:23.931007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.691 qpair failed and we were unable to recover it. 00:34:34.691 [2024-07-14 01:20:23.931165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.931191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.931369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.931394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.931569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.931595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.931771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.931796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.931956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.931982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 
00:34:34.692 [2024-07-14 01:20:23.932183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.932212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.932411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.932436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.932592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.932617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.932787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.932812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.933016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.933042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.933215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.933241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.933423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.933448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.933626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.933651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.933857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.933888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 00:34:34.692 [2024-07-14 01:20:23.934037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.934063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it. 
00:34:34.692 [2024-07-14 01:20:23.934277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.692 [2024-07-14 01:20:23.934302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.692 qpair failed and we were unable to recover it.
[... the same pair of errors repeats for every subsequent reconnect attempt in this window: posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:34:34.697 [2024-07-14 01:20:23.976268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.697 [2024-07-14 01:20:23.976293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.697 qpair failed and we were unable to recover it.
00:34:34.697 [2024-07-14 01:20:23.976439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.697 [2024-07-14 01:20:23.976464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.697 qpair failed and we were unable to recover it. 00:34:34.697 [2024-07-14 01:20:23.976634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.697 [2024-07-14 01:20:23.976659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.697 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.976830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.976855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.977070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.977096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.977264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.977289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.977496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.977521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.977703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.977728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.977939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.977966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.978169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.978195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.978354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.978379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 
00:34:34.698 [2024-07-14 01:20:23.978593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.978618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.978767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.978792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.978971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.978997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.979178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.979204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.979409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.979435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.979611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.979636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.979817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.979842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.980014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.980040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.980245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.980270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.980442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.980467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 
00:34:34.698 [2024-07-14 01:20:23.980666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.980696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.980876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.980901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.981053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.981079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.981282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.981308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.981488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.981513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.981687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.981713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.981894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.981920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.982098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.982123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.982324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.982349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.982519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.982545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 
00:34:34.698 [2024-07-14 01:20:23.982701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.982726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.982910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.982936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.983134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.983160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.983334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.983359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.983543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.983569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.983746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.983771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.983940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.983966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.984146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.984172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.984380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.984405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.984580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.984605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 
00:34:34.698 [2024-07-14 01:20:23.984757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.984782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.984954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.698 [2024-07-14 01:20:23.984980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.698 qpair failed and we were unable to recover it. 00:34:34.698 [2024-07-14 01:20:23.985155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.985180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.985328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.985353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.985529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.985555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.985736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.985762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.985905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.985932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.986104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.986130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.986317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.986342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.986522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.986547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 
00:34:34.699 [2024-07-14 01:20:23.986722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.986747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.986904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.986930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.987106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.987132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.987334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.987360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.987516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.987541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.987739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.987764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.987964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.987990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.988193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.988219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.988401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.988428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.988609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.988634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 
00:34:34.699 [2024-07-14 01:20:23.988834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.988863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.989049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.989075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.989259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.989284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.989462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.989487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.989635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.989660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.989814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.989839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.990020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.990047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.990228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.990255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.990434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.990460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.990660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.990685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 
00:34:34.699 [2024-07-14 01:20:23.990860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.990901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.991078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.991105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.991255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.991282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.991467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.991492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.991643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.991669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.991848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.991879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.992049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.992075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.992231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.992256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.992407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.992432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.992577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.992602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 
00:34:34.699 [2024-07-14 01:20:23.992739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.992764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.992941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.992967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.699 qpair failed and we were unable to recover it. 00:34:34.699 [2024-07-14 01:20:23.993171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.699 [2024-07-14 01:20:23.993196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.993376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.993401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.993575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.993600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.993779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.993804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.994010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.994036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.994213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.994239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.994413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.994439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.994592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.994617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 
00:34:34.700 [2024-07-14 01:20:23.994819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.994845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.995034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.995059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.995265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.995290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.995433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.995460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.995658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.995684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.995833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.995858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.996045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.996070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.996272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.996297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.996499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.996524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.996691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.996717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 
00:34:34.700 [2024-07-14 01:20:23.996922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.996951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.997126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.997153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.997296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.997321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.997499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.997524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.997700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.997727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.997908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.997935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.998140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.998165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.998348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.998374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.998546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.998572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.998717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.998743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 
00:34:34.700 [2024-07-14 01:20:23.998894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.998921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.999112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.999137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.999337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.999363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.999535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.999561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.999745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:23.999772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:23.999974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:24.000000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:24.000150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:24.000176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:24.000353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:24.000378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:24.000554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:24.000580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:24.000731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:24.000756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 
00:34:34.700 [2024-07-14 01:20:24.000902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:24.000928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:24.001087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:24.001112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.700 qpair failed and we were unable to recover it. 00:34:34.700 [2024-07-14 01:20:24.001263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.700 [2024-07-14 01:20:24.001290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.701 qpair failed and we were unable to recover it. 00:34:34.701 [2024-07-14 01:20:24.001471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.701 [2024-07-14 01:20:24.001497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.701 qpair failed and we were unable to recover it. 00:34:34.701 [2024-07-14 01:20:24.001641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.701 [2024-07-14 01:20:24.001668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.701 qpair failed and we were unable to recover it. 00:34:34.701 [2024-07-14 01:20:24.001855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.701 [2024-07-14 01:20:24.001890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.701 qpair failed and we were unable to recover it. 00:34:34.701 [2024-07-14 01:20:24.002090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.701 [2024-07-14 01:20:24.002116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.701 qpair failed and we were unable to recover it. 00:34:34.701 [2024-07-14 01:20:24.002297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.701 [2024-07-14 01:20:24.002323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.701 qpair failed and we were unable to recover it. 00:34:34.701 [2024-07-14 01:20:24.002500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.701 [2024-07-14 01:20:24.002525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.701 qpair failed and we were unable to recover it. 00:34:34.701 [2024-07-14 01:20:24.002704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.701 [2024-07-14 01:20:24.002731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.701 qpair failed and we were unable to recover it. 
00:34:34.701 [2024-07-14 01:20:24.002940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.701 [2024-07-14 01:20:24.002967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420
00:34:34.701 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420, with successive timestamps through 2024-07-14 01:20:24.018 ...]
00:34:34.703 [2024-07-14 01:20:24.019023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.703 [2024-07-14 01:20:24.019062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:34.703 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420, with successive timestamps through 2024-07-14 01:20:24.050, each attempt ending "qpair failed and we were unable to recover it." ...]
00:34:34.706 [2024-07-14 01:20:24.050286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.050330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.050537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.050580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.050762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.050787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.050986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.051029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.051232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.051275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.051474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.051503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.051707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.051733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.051965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.052009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.052220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.052263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.052455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.052484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 
00:34:34.706 [2024-07-14 01:20:24.052657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.052683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.052886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.052922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.053102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.053145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.053331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.053358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.053595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.053638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.053821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.706 [2024-07-14 01:20:24.053846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.706 qpair failed and we were unable to recover it. 00:34:34.706 [2024-07-14 01:20:24.054054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.054098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.054305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.054348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.054543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.054586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.054764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.054790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 
00:34:34.707 [2024-07-14 01:20:24.054953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.054999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.055205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.055248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.055469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.055512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.055671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.055698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.055850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.055885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.056102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.056146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.056354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.056397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.056604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.056651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.056799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.056826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.057031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.057057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 
00:34:34.707 [2024-07-14 01:20:24.057243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.057287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.057457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.057500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.057671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.057697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.057914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.057940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.058171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.058214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.058397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.058444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.058645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.058690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.058878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.058908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.059106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.059136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.059364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.059406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 
00:34:34.707 [2024-07-14 01:20:24.059641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.059684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.059846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.059879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.060055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.060081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.060285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.060329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.060527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.060570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.060739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.060765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.060947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.060974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.061174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.061218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.061388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.061432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.061616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.061643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 
00:34:34.707 [2024-07-14 01:20:24.061852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.061885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.062091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.062135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.062362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.062390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.062641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.062684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.062904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.062941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.063152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.063195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.707 [2024-07-14 01:20:24.063362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.707 [2024-07-14 01:20:24.063405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.707 qpair failed and we were unable to recover it. 00:34:34.708 [2024-07-14 01:20:24.063617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.708 [2024-07-14 01:20:24.063660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.708 qpair failed and we were unable to recover it. 00:34:34.708 [2024-07-14 01:20:24.063840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.708 [2024-07-14 01:20:24.063874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.708 qpair failed and we were unable to recover it. 00:34:34.708 [2024-07-14 01:20:24.064113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.708 [2024-07-14 01:20:24.064156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.708 qpair failed and we were unable to recover it. 
00:34:34.984 [2024-07-14 01:20:24.064366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.064411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.064580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.064624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.064806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.064832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.065066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.065110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.065341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.065384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.065618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.065660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.065838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.065864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.066044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.066078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.066261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.066304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.984 qpair failed and we were unable to recover it. 00:34:34.984 [2024-07-14 01:20:24.066480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.984 [2024-07-14 01:20:24.066523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 
00:34:34.985 [2024-07-14 01:20:24.066706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.066731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.066891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.066918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.067113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.067156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.067366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.067409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.067585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.067628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.067796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.067822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.068020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.068064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.068277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.068304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.068476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.068519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.068695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.068722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 
00:34:34.985 [2024-07-14 01:20:24.068915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.068944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.069170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.069215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.069413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.069457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.069613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.069639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.069817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.069844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.070035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.070078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.070316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.070359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.070578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.070620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.070789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.070815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.070987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.071033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 
00:34:34.985 [2024-07-14 01:20:24.071242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.071286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.071489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.071532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.071734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.071759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.071958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.072004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.072211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.072255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.072466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.072493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.072637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.072663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.072811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.072840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.073056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.073099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.073308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.073351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 
00:34:34.985 [2024-07-14 01:20:24.073524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.073567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.073742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.073768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.073971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.074014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.074211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.074254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.074424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.074468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.074644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.074670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.074823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.074848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.075096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.075144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.075374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.075417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.985 qpair failed and we were unable to recover it. 00:34:34.985 [2024-07-14 01:20:24.075595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.985 [2024-07-14 01:20:24.075621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 
00:34:34.986 [2024-07-14 01:20:24.075775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.075801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.075970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.076014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.076246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.076288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.076519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.076561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.076742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.076768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.076992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.077040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.077234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.077278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.077480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.077524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.077734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.077759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.077958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.078002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 
00:34:34.986 [2024-07-14 01:20:24.078235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.078278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.078455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.078499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.078653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.078680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.078863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.078896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.079113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.079155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.079356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.079399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.079593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.079636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.079806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.079832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.080034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.080079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.080255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.080299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 
00:34:34.986 [2024-07-14 01:20:24.080535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.080579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.080751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.080777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.080952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.080979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.081209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.081252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.081433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.081480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.081657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.081684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.081869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.081896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.082069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.082094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.082328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.082370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.082605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.082648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 
00:34:34.986 [2024-07-14 01:20:24.082849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.082888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.083101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.083127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.083358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.083400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.083629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.083673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.083876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.083903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.084098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.084124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.084323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.084369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.084546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.084595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.084780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.084806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 00:34:34.986 [2024-07-14 01:20:24.084985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.986 [2024-07-14 01:20:24.085020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.986 qpair failed and we were unable to recover it. 
00:34:34.987 [2024-07-14 01:20:24.085265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.085308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.085509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.085553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.085757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.085783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.085990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.086016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.086226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.086255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.086470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.086514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.086674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.086700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.086902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.086929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.087162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.087204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.087388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.087432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 
00:34:34.987 [2024-07-14 01:20:24.087605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.087649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.087834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.087860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.088073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.088102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.088345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.088389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.088582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.088611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.088803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.088829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.089045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.089075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.089291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.089334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.089536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.089579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.089752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.089778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 
00:34:34.987 [2024-07-14 01:20:24.089978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.090028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.090283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.090310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.090538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.090581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.090768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.090794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.091123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.091186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.091603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.091652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.091899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.091933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.092166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.092194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.092462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.092515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.092715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.092740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 
00:34:34.987 [2024-07-14 01:20:24.092943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.092973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.093186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.093214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.093611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.093672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.093864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.093894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.094073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.094101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.094295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.094323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.094705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.094766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.095002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.095035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.095225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.095253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 00:34:34.987 [2024-07-14 01:20:24.095479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.987 [2024-07-14 01:20:24.095507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.987 qpair failed and we were unable to recover it. 
00:34:34.987 [2024-07-14 01:20:24.095726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.095751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.095903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.095929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.096143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.096169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.096372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.096398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.096578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.096604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.096807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.096832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.097012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.097041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.097295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.097323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.097688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.097741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.097931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.097960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 
00:34:34.988 [2024-07-14 01:20:24.098224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.098253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.098507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.098534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.098729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.098755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.099070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.099132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.099395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.099423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.099594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.099620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.099795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.099820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.100001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.100030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.100250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.100278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.100484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.100512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 
00:34:34.988 [2024-07-14 01:20:24.100735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.100760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.100993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.101044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.101243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.101271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.101524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.101552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.101750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.101780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.101995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.102023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.102263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.102307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.102516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.102559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.102741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.102766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.103033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.103086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 
00:34:34.988 [2024-07-14 01:20:24.103306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.103349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.103564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.103606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.103786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.103811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.104182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.104231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.104429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.104472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.104680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.104723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.104962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.105007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.105382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.105441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.988 [2024-07-14 01:20:24.105670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.988 [2024-07-14 01:20:24.105714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.988 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.105862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.105895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 
00:34:34.989 [2024-07-14 01:20:24.106104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.106129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.106326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.106370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.106598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.106640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.106850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.106883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.107106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.107131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.107307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.107349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.107527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.107571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.107745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.107771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.107949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.107976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.108212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.108254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 
00:34:34.989 [2024-07-14 01:20:24.108436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.108482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.108714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.108756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.109056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.109109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.109383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.109427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.109573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.109600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.109777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.109803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.110006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.110048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.110228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.110271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.110462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.110504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.110653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.110679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 
00:34:34.989 [2024-07-14 01:20:24.110859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.110891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.111035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.111062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.111239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.111282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.111512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.111554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.111710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.111735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.111887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.111913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.112142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.112185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.112356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.112399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.112583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.112609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.112755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.112782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 
00:34:34.989 [2024-07-14 01:20:24.112976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.113020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.113219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.113263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.113476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.113503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.989 qpair failed and we were unable to recover it. 00:34:34.989 [2024-07-14 01:20:24.113657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.989 [2024-07-14 01:20:24.113683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.113861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.113893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.114099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.114142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.114379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.114422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.114648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.114695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.114886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.114913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.115089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.115115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 
00:34:34.990 [2024-07-14 01:20:24.115319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.115362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.115596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.115639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.115810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.115836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.116041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.116085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.116323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.116366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.116586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.116629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.116826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.116851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.117002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.117028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.117231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.117260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.117468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.117497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 
00:34:34.990 [2024-07-14 01:20:24.117706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.117748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.117952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.117996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.118216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.118242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.118471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.118514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.118658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.118684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.118890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.118917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.119095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.119140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.119373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.119416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.119613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.119641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.119804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.119829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 
00:34:34.990 [2024-07-14 01:20:24.119994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.120021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.120227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.120269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.120472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.120513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.120663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.120688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.120887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.120920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.121120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.121149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.121378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.121408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.121630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.121658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.121858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.121889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.122092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.122117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 
00:34:34.990 [2024-07-14 01:20:24.122351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.122379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.122607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.122636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.122806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.122832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.990 [2024-07-14 01:20:24.123025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.990 [2024-07-14 01:20:24.123050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.990 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.123220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.123246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.123400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.123426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.123602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.123627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.123801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.123832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.124017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.124059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.124256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.124281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 
00:34:34.991 [2024-07-14 01:20:24.124485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.124511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.124690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.124716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.124863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.124895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.125105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.125134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.125470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.125524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.125745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.125770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.125968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.125997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.126181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.126209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.126506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.126557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.126749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.126774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 
00:34:34.991 [2024-07-14 01:20:24.126967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.126996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.127210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.127238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.127464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.127493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.127668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.127694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.127846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.127885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.128064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.128106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.128389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.128418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.128591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.128617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.128761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.128786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.128965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.128994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 
00:34:34.991 [2024-07-14 01:20:24.129297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.129353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.129632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.129660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.129857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.129888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.130108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.130136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.130336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.130365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.130596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.130621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.130824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.130849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.131042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.131071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.131453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.131505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.131698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.131723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 
00:34:34.991 [2024-07-14 01:20:24.131916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.131945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.132173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.132202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.132417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.132445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.991 [2024-07-14 01:20:24.132616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.991 [2024-07-14 01:20:24.132641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.991 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.132817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.132842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.133013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.133041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.133288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.133317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.133579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.133607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.133810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.133836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.134021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.134052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 
00:34:34.992 [2024-07-14 01:20:24.134261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.134290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.134516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.134545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.134716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.134743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.134932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.134961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.135157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.135186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.135414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.135442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.135665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.135690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.135875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.135917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.136170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.136198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.136597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.136657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 
00:34:34.992 [2024-07-14 01:20:24.136849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.136882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.137116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.137144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.137577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.137630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.137851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.137882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.138117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.138145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.138405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.138434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.138744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.138774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.138970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.138996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.139176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.139201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.139370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.139395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 
00:34:34.992 [2024-07-14 01:20:24.139550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.139575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.139750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.139775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.139952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.139978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.140161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.140186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.140332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.140362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.140535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.140561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.140716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.140742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.140911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.140937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.141087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.141112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.141294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.141319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 
00:34:34.992 [2024-07-14 01:20:24.141498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.141524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.141727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.141753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.141905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.141931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.992 [2024-07-14 01:20:24.142104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.992 [2024-07-14 01:20:24.142129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.992 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.142285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.142312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.142480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.142506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.142712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.142738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.142917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.142943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.143127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.143153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.143302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.143328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 
00:34:34.993 [2024-07-14 01:20:24.143505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.143530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.143730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.143755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.143956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.143983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.144161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.144186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.144333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.144358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.144560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.144585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.144755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.144781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.144956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.144982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.145163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.145188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.145384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.145409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 
00:34:34.993 [2024-07-14 01:20:24.145608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.145633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.145806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.145831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.146014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.146041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.146186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.146212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.146380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.146405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.146579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.146604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.146754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.146779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.146954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.146981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.147151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.147177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.147348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.147373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 
00:34:34.993 [2024-07-14 01:20:24.147550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.147575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.147725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.147750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.147950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.147976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.148183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.148208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.148388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.148418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.148593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.148618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.148770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.148795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.148998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.149024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.149231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.993 [2024-07-14 01:20:24.149256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.993 qpair failed and we were unable to recover it. 00:34:34.993 [2024-07-14 01:20:24.149460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.149486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 
00:34:34.994 [2024-07-14 01:20:24.149662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.149687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.149858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.149890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.150094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.150120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.150308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.150333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.150488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.150515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.150714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.150740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.150930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.150956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.151103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.151129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.151335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.151361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.151502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.151527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 
00:34:34.994 [2024-07-14 01:20:24.151712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.151737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.151913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.151939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.152095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.152120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.152294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.152320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.152531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.152557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.152744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.152769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.152970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.152996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.153168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.153193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.153393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.153418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.153572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.153598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 
00:34:34.994 [2024-07-14 01:20:24.153802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.153828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.153991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.154018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.154223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.154248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.154424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.154450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.154635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.154660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.154877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.154903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.155110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.155135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.155309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.155335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.155513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.155539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.155712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.155737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 
00:34:34.994 [2024-07-14 01:20:24.155915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.155942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.156097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.156123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.156296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.156322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.156477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.156503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.156675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.156705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.156883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.156909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.157116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.157141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.157317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.157342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.157491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.157517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.994 qpair failed and we were unable to recover it. 00:34:34.994 [2024-07-14 01:20:24.157693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.994 [2024-07-14 01:20:24.157719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 
00:34:34.995 [2024-07-14 01:20:24.157924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.157950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.158129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.158154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.158351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.158377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.158523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.158548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.158716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.158741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.158890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.158916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.159093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.159118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.159320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.159345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.159556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.159582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 00:34:34.995 [2024-07-14 01:20:24.159787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.995 [2024-07-14 01:20:24.159813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:34.995 qpair failed and we were unable to recover it. 
00:34:34.995 [2024-07-14 01:20:24.160018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.995 [2024-07-14 01:20:24.160044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420
00:34:34.995 qpair failed and we were unable to recover it.
[... the same three messages repeat for every connection attempt from 01:20:24.160186 through 01:20:24.177923, always with errno = 111, tqpair=0x7f5f28000b90, addr=10.0.0.2, port=4420 ...]
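For context: errno = 111 is ECONNREFUSED on Linux, i.e. the TCP connection attempt reaches 10.0.0.2 but nothing is listening on port 4420, which is consistent with the target application having been taken down by the target_disconnect test at this point; each posix_sock_create() attempt therefore fails immediately and the qpair cannot be re-established. The following is a minimal stand-alone sketch of the same failure mode using plain POSIX sockets rather than SPDK's sock layer (address and port are taken from the log; run it while nothing listens on that port):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),                      /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);     /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}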
[... the repeating connect() failed, errno = 111 / sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." messages continue from 01:20:24.178099 through 01:20:24.189106, interleaved with the following test-harness output ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1300583 Killed "${NVMF_APP[@]}" "$@"
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1301138
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1301138
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1301138 ']'
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
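The interleaved shell trace above shows the target side of the test: target_disconnect.sh has killed the previous nvmf_tgt instance (pid 1300583), and nvmfappstart relaunches it inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0, after which waitforlisten polls (max_retries=100) for the new process (pid 1301138) to come up and expose its RPC socket at /var/tmp/spdk.sock. Until the relaunched target is listening on 10.0.0.2:4420 again, the host keeps retrying and collecting the same errno = 111 failures. Below is a small illustrative sketch of such a bounded reconnect loop; it is not SPDK's actual qpair reconnect logic, and the retry count and delay are arbitrary:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Keep retrying connect() while the peer refuses, give up after a fixed
 * number of attempts. Illustrative only: the real NVMe/TCP host code layers
 * a qpair state machine on top of the raw socket connect seen in the log. */
static int connect_with_retry(const char *ip, uint16_t port, int max_attempts)
{
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(port),
        };
        inet_pton(AF_INET, ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                                   /* target is listening again */

        printf("attempt %d: connect() failed, errno = %d (%s)\n",
               attempt, errno, strerror(errno));
        close(fd);
        sleep(1);                                        /* back off before retrying */
    }
    return -1;                                           /* gave up: could not recover */
}

int main(void)
{
    /* Address and port from the log; 30 attempts is an arbitrary illustrative limit. */
    int fd = connect_with_retry("10.0.0.2", 4420, 30);
    if (fd >= 0)
        close(fd);
    return fd >= 0 ? 0 : 1;
}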
[... the connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." messages keep repeating for tqpair=0x7f5f28000b90, addr=10.0.0.2, port=4420 from 01:20:24.189329 through 01:20:24.194469 ...]
00:34:34.999 [2024-07-14 01:20:24.194689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:34.999 [2024-07-14 01:20:24.194724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:34.999 qpair failed and we were unable to recover it.
[... from here on the same three messages repeat for the new qpair, tqpair=0x7f5f20000b90, addr=10.0.0.2, port=4420, from 01:20:24.194894 through 01:20:24.204011 ...]
00:34:35.000 [2024-07-14 01:20:24.204240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.204283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.204492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.204535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.204747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.204777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.204977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.205004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.205202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.205230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.205414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.205443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.205648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.205673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.205848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.205880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.206076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.206105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.206324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.206352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 
00:34:35.000 [2024-07-14 01:20:24.206619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.206673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.206880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.206907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.207114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.207142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.207369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.207411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.207614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.207640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.207845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.207882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.208114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.208143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.208402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.208430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.000 qpair failed and we were unable to recover it. 00:34:35.000 [2024-07-14 01:20:24.208772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.000 [2024-07-14 01:20:24.208816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.209019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.209045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 
00:34:35.001 [2024-07-14 01:20:24.209244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.209272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.209496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.209524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.209724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.209749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.209918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.209946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.210141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.210170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.210363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.210391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.210558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.210585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.210737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.210762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.210959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.210989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.211219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.211248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 
00:34:35.001 [2024-07-14 01:20:24.211449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.211477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.211667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.211692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.211875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.211918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.212159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.212187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.212439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.212491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.212692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.212717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.212899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.212926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.213118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.213159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.213339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.213364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.213560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.213585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 
00:34:35.001 [2024-07-14 01:20:24.213739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.213766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.213991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.214020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f28000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.214275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.214322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.214640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.214698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.214893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.214921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.215124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.215166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.215431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.215473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.215829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.215890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.216096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.216122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.216321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.216365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 
00:34:35.001 [2024-07-14 01:20:24.216544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.216571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.216754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.216780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.216958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.216984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.217189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.217217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.217440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.217482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.217657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.001 [2024-07-14 01:20:24.217704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.001 qpair failed and we were unable to recover it. 00:34:35.001 [2024-07-14 01:20:24.217929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.217972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.218174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.218217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.218412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.218441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.218632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.218659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 
00:34:35.002 [2024-07-14 01:20:24.218812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.218838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.219057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.219084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.219286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.219329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.219532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.219574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.219726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.219752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.219927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.219972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.220177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.220219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.220516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.220571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.220747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.220773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.221003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.221047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 
00:34:35.002 [2024-07-14 01:20:24.221224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.221266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.221468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.221509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.221664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.221689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.221829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.221856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.222056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.222098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.222335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.222378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.222603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.222645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.222855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.222887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.223062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.223104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.223313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.223338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 
00:34:35.002 [2024-07-14 01:20:24.223512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.223555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.223755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.223780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.223944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.223971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.224308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.224355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.224524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.224568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.224743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.224769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.225004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.225047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.225342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.225398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.225589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.225633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.225836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.225861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 
00:34:35.002 [2024-07-14 01:20:24.226017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.226043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.226271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.226313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.226510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.226555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.226734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.002 [2024-07-14 01:20:24.226759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.002 qpair failed and we were unable to recover it. 00:34:35.002 [2024-07-14 01:20:24.226984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.227028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.227218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.227266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.227479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.227506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.227680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.227705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.227915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.227941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.228115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.228158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 
00:34:35.003 [2024-07-14 01:20:24.228351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.228393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.228618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.228662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.228842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.228876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.229072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.229116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.229340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.229382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.229564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.229607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.229784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.229809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.229995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.230020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.230218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.230260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.230459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.230501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 
00:34:35.003 [2024-07-14 01:20:24.230720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.230761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.230964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.231009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.231217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.231260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.231493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.231537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.231718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.231743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.231904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.231933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.232184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.232226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.232456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.232499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.232678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.232703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.232902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.232946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 
00:34:35.003 [2024-07-14 01:20:24.233174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.233216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.233447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.233490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.233666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.233695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.233839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.233870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.234068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.234109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.234316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.234358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.234559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.234585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.234761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.234787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.234978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.235021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 00:34:35.003 [2024-07-14 01:20:24.235256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.003 [2024-07-14 01:20:24.235300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.003 qpair failed and we were unable to recover it. 
00:34:35.003 [2024-07-14 01:20:24.235470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.003 [2024-07-14 01:20:24.235513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.003 qpair failed and we were unable to recover it.
00:34:35.003 [2024-07-14 01:20:24.235689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.003 [2024-07-14 01:20:24.235716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.003 qpair failed and we were unable to recover it.
00:34:35.003 [2024-07-14 01:20:24.235900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.003 [2024-07-14 01:20:24.235943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.003 qpair failed and we were unable to recover it.
00:34:35.003 [2024-07-14 01:20:24.236137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.003 [2024-07-14 01:20:24.236132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:34:35.003 [2024-07-14 01:20:24.236181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.003 qpair failed and we were unable to recover it.
00:34:35.003 [2024-07-14 01:20:24.236225] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:35.003 [2024-07-14 01:20:24.236413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.004 [2024-07-14 01:20:24.236461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.004 qpair failed and we were unable to recover it.
00:34:35.004 [2024-07-14 01:20:24.236660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.004 [2024-07-14 01:20:24.236685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.004 qpair failed and we were unable to recover it.
00:34:35.004 [2024-07-14 01:20:24.236832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.004 [2024-07-14 01:20:24.236858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.004 qpair failed and we were unable to recover it.
00:34:35.004 [2024-07-14 01:20:24.237066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.004 [2024-07-14 01:20:24.237109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.004 qpair failed and we were unable to recover it.
00:34:35.004 [2024-07-14 01:20:24.237339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.004 [2024-07-14 01:20:24.237383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.004 qpair failed and we were unable to recover it.
00:34:35.004 [2024-07-14 01:20:24.237634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.237660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.237837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.237877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.238109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.238152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.238353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.238382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.238772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.238827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.239040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.239083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.239335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.239362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.239562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.239606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.239762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.239788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.240004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.240048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 
00:34:35.004 [2024-07-14 01:20:24.240285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.240337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.240570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.240613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.240793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.240819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.241021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.241067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.241398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.241449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.241674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.241717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.241933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.241959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.242139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.242182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.242446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.242500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.242728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.242757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 
00:34:35.004 [2024-07-14 01:20:24.242979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.243023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.243203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.243250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.243567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.243618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.243792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.243818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.244023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.244048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.244248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.244291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.244560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.244609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.244761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.244787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.244959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.244986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.245222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.245265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 
00:34:35.004 [2024-07-14 01:20:24.245470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.245511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.245662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.245699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.245878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.245904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.246128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.246171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.246529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.004 [2024-07-14 01:20:24.246556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.004 qpair failed and we were unable to recover it. 00:34:35.004 [2024-07-14 01:20:24.246762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.246793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.246976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.247021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.247230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.247273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.247480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.247522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.247696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.247722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 
00:34:35.005 [2024-07-14 01:20:24.247953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.247997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.251893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.251925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.252157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.252185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.252348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.252376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.252558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.252585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.252768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.252795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.253013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.253041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.253246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.253273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.253488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.253515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.253703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.253729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 
00:34:35.005 [2024-07-14 01:20:24.253913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.253941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.254161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.254187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.254387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.254425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.254614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.254642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.254826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.254858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.255064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.255107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.255307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.255336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.255553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.255596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.255807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.255831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.256040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.256084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 
00:34:35.005 [2024-07-14 01:20:24.256295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.256338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.256484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.256510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.256722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.256749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.256949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.256998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.257233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.257277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.257494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.257536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.257720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.257746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.257916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.257945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.258161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.258188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.261408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.261437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 
00:34:35.005 [2024-07-14 01:20:24.261672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.261701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.261894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.261921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.262192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.262251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.262487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.262529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.262737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.005 [2024-07-14 01:20:24.262762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.005 qpair failed and we were unable to recover it. 00:34:35.005 [2024-07-14 01:20:24.262987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.263037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.263217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.263259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.263456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.263499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.263710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.263737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.263935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.263979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 
00:34:35.006 [2024-07-14 01:20:24.264180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.264223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.264452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.264495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.264692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.264733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.265035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.265079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.265317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.265361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.265538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.265563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.265793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.265834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.266052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.266095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.266319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.266348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.266618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.266661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 
00:34:35.006 [2024-07-14 01:20:24.266849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.266891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.267109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.267152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.267362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.267405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.267604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.267647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.267801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.267826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.267996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.268040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.268235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.268278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.268473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.268501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.268701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.268726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.268984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.269028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 
00:34:35.006 [2024-07-14 01:20:24.269254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.269297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.269527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.269570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.269751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.269777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.270007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.270051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.270264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.270306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.270525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.270567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.270756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.270781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.271006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.271051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.271252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.271295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.006 qpair failed and we were unable to recover it. 00:34:35.006 [2024-07-14 01:20:24.271529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.006 [2024-07-14 01:20:24.271573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 
00:34:35.007 [2024-07-14 01:20:24.271902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.271942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.272174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.272218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.272425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.272468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.272698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.272741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.273047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.273091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.273307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.273354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.273566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.273608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.273786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.273811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.274007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.274033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 EAL: No free 2048 kB hugepages reported on node 1 00:34:35.007 [2024-07-14 01:20:24.274226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.274268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 
00:34:35.007 [2024-07-14 01:20:24.274499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.274542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.274717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.274742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.274974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.275017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.275235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.275278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.275451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.275494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.275671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.275697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.275897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.275923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.276152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.276181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.276397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.276441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.276682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.276725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 
00:34:35.007 [2024-07-14 01:20:24.276927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.276970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.277199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.277241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.277471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.277514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.277694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.277720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.277924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.277951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.278104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.278129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.278277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.278303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.278459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.278484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.278660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.278687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.278842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.278873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 
00:34:35.007 [2024-07-14 01:20:24.279048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.279074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.279288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.279314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.279500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.279525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.279731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.279757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.279912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.279938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.280114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.280140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.280324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.280350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.280502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.280528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.280680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.280706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.007 qpair failed and we were unable to recover it. 00:34:35.007 [2024-07-14 01:20:24.280907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.007 [2024-07-14 01:20:24.280932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 
00:34:35.008 [2024-07-14 01:20:24.281114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.281140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.281315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.281340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.281543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.281569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.281711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.281736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.281946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.281973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.282125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.282155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.282319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.282344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.282519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.282544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.282748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.282772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 00:34:35.008 [2024-07-14 01:20:24.282983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.283008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it. 
00:34:35.008 [2024-07-14 01:20:24.283152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.008 [2024-07-14 01:20:24.283178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.008 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back, timestamps 01:20:24.283329 through 01:20:24.308889 ...]
00:34:35.011 [2024-07-14 01:20:24.309098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.011 [2024-07-14 01:20:24.309115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:35.011 [2024-07-14 01:20:24.309124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.011 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f5f20000b90 (addr=10.0.0.2, port=4420) continues uninterrupted through 01:20:24.325340 ...]
00:34:35.013 [2024-07-14 01:20:24.325516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.325547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.325722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.325748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.325964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.325989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.326163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.326189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.326334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.326361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.326514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.326540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.326747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.326773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.326957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.326983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.327157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.327192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.327365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.327391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 
00:34:35.013 [2024-07-14 01:20:24.327570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.327596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.327775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.327801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.327981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.328007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.328160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.328185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.328342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.328367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.328544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.328570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.013 qpair failed and we were unable to recover it. 00:34:35.013 [2024-07-14 01:20:24.328753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.013 [2024-07-14 01:20:24.328779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.328956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.328981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.329166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.329191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.329377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.329403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 
00:34:35.014 [2024-07-14 01:20:24.329574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.329601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.329739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.329765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.329941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.329967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.330145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.330171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.330320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.330347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.330523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.330549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.330728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.330753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.330959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.330985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.331157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.331191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.331367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.331394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 
00:34:35.014 [2024-07-14 01:20:24.331572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.331598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.331777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.331802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.331982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.332008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.332213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.332239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.332415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.332441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.332588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.332613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.332773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.332799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.332977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.333004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.333181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.333207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.333408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.333434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 
00:34:35.014 [2024-07-14 01:20:24.333632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.333662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.333832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.333856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.334024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.334050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.334232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.334258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.334459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.334484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.334636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.334661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.334879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.334905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.335079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.335105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.335284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.335310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.335456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.335480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 
00:34:35.014 [2024-07-14 01:20:24.335657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.335682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.335859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.335910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.336089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.336114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.014 [2024-07-14 01:20:24.336265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.014 [2024-07-14 01:20:24.336290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.014 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.336450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.336475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.336644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.336669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.336842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.336874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.337057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.337083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.337288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.337313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.337514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.337539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 
00:34:35.015 [2024-07-14 01:20:24.337714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.337739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.337915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.337942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.338118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.338144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.338283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.338308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.338450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.338475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.338649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.338675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.338856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.338889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.339096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.339122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.339282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.339307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.339486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.339512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 
00:34:35.015 [2024-07-14 01:20:24.339718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.339744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.339949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.339975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.340117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.340142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.340321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.340346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.340500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.340527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.340702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.340728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.340877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.340903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.341085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.341113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.341265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.341292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.341447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.341472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 
00:34:35.015 [2024-07-14 01:20:24.341649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.341680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.341895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.341922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.342101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.342128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.342308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.342333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.342482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.342508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.342690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.342715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.342893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.342920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.343122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.343149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.343301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.343325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.343476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.343501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 
00:34:35.015 [2024-07-14 01:20:24.343711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.343737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.015 [2024-07-14 01:20:24.343917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.015 [2024-07-14 01:20:24.343944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.015 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.344122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.344148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.344319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.344344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.344527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.344553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.344758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.344783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.344940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.344966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.345143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.345169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.345347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.345373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.345550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.345576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 
00:34:35.016 [2024-07-14 01:20:24.345725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.345751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.345927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.345953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.346133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.346158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.346303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.346329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.346482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.346509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.346708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.346733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.346941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.346967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.347143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.347176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.347325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.347351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.347527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.347553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 
00:34:35.016 [2024-07-14 01:20:24.347728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.347753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.347930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.347956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.348106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.348132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.348305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.348331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.348507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.348533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.348705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.348730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.348908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.348936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.349137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.349163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.349345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.349371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.349551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.349577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 
00:34:35.016 [2024-07-14 01:20:24.349759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.349785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.349969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.349995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.350171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.350196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.350398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.350425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.350609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.350635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.350812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.350838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.351045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.351072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.351219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.351245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.351395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.351421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 00:34:35.016 [2024-07-14 01:20:24.351629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.016 [2024-07-14 01:20:24.351655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.016 qpair failed and we were unable to recover it. 
00:34:35.016 [2024-07-14 01:20:24.351835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.016 [2024-07-14 01:20:24.351861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.016 qpair failed and we were unable to recover it.
[identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." entries repeat for tqpair=0x7f5f20000b90 (addr=10.0.0.2, port=4420) from 2024-07-14 01:20:24.352051 through 01:20:24.393968]
00:34:35.295 [2024-07-14 01:20:24.392965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.392991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.393165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.393191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.393338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.393364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.393538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.393563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.393716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.393742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.393942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.393968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.394113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.394139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.394338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.394380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.394565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.394593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.394751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.394777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 
00:34:35.295 [2024-07-14 01:20:24.394963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.394990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.395137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.395163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.395370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.395396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.395572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.395597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.395743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.395769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.395951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.395979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.396186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.396211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.396363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.396393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.396547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.396574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.396724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.396751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 
00:34:35.295 [2024-07-14 01:20:24.396939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.396965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.397110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.397136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.397319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.397346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.397491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.397531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.397716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.397741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.397922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.397949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.398099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.398125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.398364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.398390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.398625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.398651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.398802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.398827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 
00:34:35.295 [2024-07-14 01:20:24.399069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.399096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.399246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.399272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.399420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.399447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.399589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.399614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.399758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.399783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 qpair failed and we were unable to recover it. 00:34:35.295 [2024-07-14 01:20:24.399900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.295 [2024-07-14 01:20:24.399926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.295 [2024-07-14 01:20:24.399935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.295 [2024-07-14 01:20:24.399950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.295 [2024-07-14 01:20:24.399951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.295 [2024-07-14 01:20:24.399965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.399977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.296 [2024-07-14 01:20:24.400041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:35.296 [2024-07-14 01:20:24.400133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.400158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 
00:34:35.296 [2024-07-14 01:20:24.400072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:35.296 [2024-07-14 01:20:24.400119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:35.296 [2024-07-14 01:20:24.400122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:35.296 [2024-07-14 01:20:24.400308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.400335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.400519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.400544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.400716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.400742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.400917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.400943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.401126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.401151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.401299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.401326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.401587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.401612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.401779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.401805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.401993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.402020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 
00:34:35.296 [2024-07-14 01:20:24.402170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.402198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.402341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.402368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.402549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.402574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.402722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.402746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.402905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.402931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.403082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.403107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.403281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.403307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.403576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.403602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.403743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.403774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.403995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.404021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 
00:34:35.296 [2024-07-14 01:20:24.404170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.404197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.404377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.404402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.404574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.404600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.404760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.404786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.404943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.404970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.405127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.405152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.405313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.405339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.405505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.405531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.405700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.405727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.405909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.405935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 
00:34:35.296 [2024-07-14 01:20:24.406074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.406099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.406241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.406266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.406416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.406441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.406586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.406612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.406782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.406808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.406952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.406978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.407149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.407174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.407350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.407375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.296 [2024-07-14 01:20:24.407523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.296 [2024-07-14 01:20:24.407550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.296 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.407703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.407728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 
00:34:35.297 [2024-07-14 01:20:24.407928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.407955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.408111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.408138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.408309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.408336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.408510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.408535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.408708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.408733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.408944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.408970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.409179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.409205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.409348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.409374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.409548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.409574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.409723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.409748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 
00:34:35.297 [2024-07-14 01:20:24.409928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.409955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.410108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.410134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.410300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.410325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.410502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.410527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.410701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.410727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.410904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.410930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.411122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.411148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.411299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.411325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.411496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.411526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.411706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.411732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 
00:34:35.297 [2024-07-14 01:20:24.411902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.411929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.412090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.412115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.412300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.412328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.412486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.412512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.412660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.412686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.412892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.412919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.413093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.413119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.413265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.413290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.413434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.413460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.413637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.413663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 
00:34:35.297 [2024-07-14 01:20:24.413833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.413858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.414020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.297 [2024-07-14 01:20:24.414045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.297 qpair failed and we were unable to recover it. 00:34:35.297 [2024-07-14 01:20:24.414196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.414222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.414428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.414455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.414631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.414658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.414829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.414856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.415004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.415029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.415227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.415253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.415393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.415420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.415579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.415605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 
00:34:35.298 [2024-07-14 01:20:24.415760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.415786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.415966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.415992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.416135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.416160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.416411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.416437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.416586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.416612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.416801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.416827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.416994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.417021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.417172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.417197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.417398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.417423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.417600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.417626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 
00:34:35.298 [2024-07-14 01:20:24.417773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.417799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.417947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.417973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.418121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.418146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.418322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.418348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.418495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.418520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.418677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.418703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.418842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.418874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.419038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.419064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.419228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.419257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.419446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.419472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 
00:34:35.298 [2024-07-14 01:20:24.419619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.419645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.419791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.419816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.419972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.419998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.420164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.420191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.420397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.420423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.420603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.420628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.420795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.420821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.421002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.421028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.421233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.421259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.421405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.421431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 
00:34:35.298 [2024-07-14 01:20:24.421569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.421594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.421745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.421770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.422050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.298 [2024-07-14 01:20:24.422076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.298 qpair failed and we were unable to recover it. 00:34:35.298 [2024-07-14 01:20:24.422279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.422304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.422454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.422479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.422624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.422649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.422822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.422848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.423027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.423052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.423331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.423356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.423529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.423555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 
00:34:35.299 [2024-07-14 01:20:24.423698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.423723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.423901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.423927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.424098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.424123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.424269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.424294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.424441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.424466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.424642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.424668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.424828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.424852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.425002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.425027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.425182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.425210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.425357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.425382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 
00:34:35.299 [2024-07-14 01:20:24.425545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.425570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.425727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.425753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.425906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.425933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.426074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.426099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.426249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.426275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.426422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.426449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.426593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.426619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.426761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.426787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.426932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.426967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.427143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.427169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 
00:34:35.299 [2024-07-14 01:20:24.427373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.427399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.427548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.427573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.427723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.427750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.427912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.427938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.428140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.428165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.428314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.428339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.428483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.428508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.428659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.428685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.428857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.428889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.429068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.429092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 
00:34:35.299 [2024-07-14 01:20:24.429242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.429267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.429410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.429435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.429576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.429601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.429782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.299 [2024-07-14 01:20:24.429808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.299 qpair failed and we were unable to recover it. 00:34:35.299 [2024-07-14 01:20:24.429995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.430022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.430190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.430215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.430388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.430413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.430589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.430614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.430761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.430786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.430936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.430963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 
00:34:35.300 [2024-07-14 01:20:24.431114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.431141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.431325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.431351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.431503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.431529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.431669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.431695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.431873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.431899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.432062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.432088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.432387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.432413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.432591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.432616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.432788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.432814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.432972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.432997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 
00:34:35.300 [2024-07-14 01:20:24.433151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.433178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.433350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.433376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.433529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.433555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.433716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.433742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.433920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.433947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.434094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.434122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.434283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.434309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.434513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.434538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.434707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.434737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.434891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.434918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 
00:34:35.300 [2024-07-14 01:20:24.435096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.435122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.435279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.435306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.435480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.435506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.435650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.435677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.435830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.435856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.436107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.436132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.436276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.436304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.436482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.436507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.436654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.436680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.436831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.436857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 
00:34:35.300 [2024-07-14 01:20:24.437035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.437061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.437237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.437263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.437521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.437547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.437750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.437776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.437948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.300 [2024-07-14 01:20:24.437974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.300 qpair failed and we were unable to recover it. 00:34:35.300 [2024-07-14 01:20:24.438177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.438202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.438372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.438397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.438534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.438560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.438743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.438769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.438949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.438975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 
00:34:35.301 [2024-07-14 01:20:24.439140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.439166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.439346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.439371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.439548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.439574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.439746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.439771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.439952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.439978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.440152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.440179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.440459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.440485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.440637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.440662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.440830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.440856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.441103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.441129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 
00:34:35.301 [2024-07-14 01:20:24.441280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.441306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.441448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.441475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.441652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.441679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.441841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.441872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.442016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.442042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.442248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.442274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.442418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.442445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.442625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.442651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.442807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.442837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.443024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.443050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 
00:34:35.301 [2024-07-14 01:20:24.443201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.443227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.443421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.443447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.443595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.443621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.443766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.443792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.443981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.444008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.444188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.444214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.444371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.444396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.444572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.444599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.444762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.444788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.301 [2024-07-14 01:20:24.444968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.444994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 
00:34:35.301 [2024-07-14 01:20:24.445170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.301 [2024-07-14 01:20:24.445195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.301 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.445365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.445391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.445566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.445593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.445769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.445794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.445950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.445976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.446148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.446180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.446342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.446368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.446571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.446597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.446745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.446771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.447009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.447035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 
00:34:35.302 [2024-07-14 01:20:24.447192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.447217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.447366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.447391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.447545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.447570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.447720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.447744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.447920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.447946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.448123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.448148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.448292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.448316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.448459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.448484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.448649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.448675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.448819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.448844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 
00:34:35.302 [2024-07-14 01:20:24.449002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.449028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.449169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.449195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.449338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.449362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.449569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.449595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.449743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.449770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.449962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.449989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.450139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.450166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.450367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.450392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.450554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.450585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.450753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.450778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 
00:34:35.302 [2024-07-14 01:20:24.450936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.450962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.451137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.451163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.451300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.451324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.451509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.451534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.451704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.451729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.451919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.451947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.452093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.452118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.452307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.452333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.452479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.452505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 00:34:35.302 [2024-07-14 01:20:24.452680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.302 [2024-07-14 01:20:24.452706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.302 qpair failed and we were unable to recover it. 
00:34:35.302 [2024-07-14 01:20:24.452879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.302 [2024-07-14 01:20:24.452905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420
00:34:35.302 qpair failed and we were unable to recover it.
[... the same three-line error record repeats for every connection retry between [2024-07-14 01:20:24.452879] and [2024-07-14 01:20:24.493670] (console timestamps 00:34:35.302 through 00:34:35.308): posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:34:35.308 [2024-07-14 01:20:24.493826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.493852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.494030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.494056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.494225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.494250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.494401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.494425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.494601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.494626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.494803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.494833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.494996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.495021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.495173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.495198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.495367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.495393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.495568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.495594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 
00:34:35.308 [2024-07-14 01:20:24.495737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.495763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.495916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.495942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.496087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.496112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.496307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.496333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.496483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.496510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.496682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.496708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.496860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.496890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.497068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.497095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.497294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.497320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.497505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.497531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 
00:34:35.308 [2024-07-14 01:20:24.497671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.497698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.497844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.497874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.498049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.498075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.498213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.498239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.498397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.498422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.498592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.498617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.498773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.498798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.498973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.498999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.499170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.499196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.499371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.499396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 
00:34:35.308 [2024-07-14 01:20:24.499593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.499619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.499822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.308 [2024-07-14 01:20:24.499848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.308 qpair failed and we were unable to recover it. 00:34:35.308 [2024-07-14 01:20:24.500020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.500047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.500194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.500219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.500387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.500413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.500586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.500612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.500783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.500810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.500988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.501014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.501213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.501238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.501419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.501445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 
00:34:35.309 [2024-07-14 01:20:24.501597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.501623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.501798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.501824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.502004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.502030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.502209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.502234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.502372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.502397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.502596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.502626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.502804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.502830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.502993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.503018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.503163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.503189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.503364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.503388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 
00:34:35.309 [2024-07-14 01:20:24.503560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.503586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.503738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.503762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.503917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.503944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.504096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.504121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.504298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.504323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.504499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.504526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.504726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.504751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.504917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.504944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.505121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.505145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.505327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.505352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 
00:34:35.309 [2024-07-14 01:20:24.505516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.505542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.505732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.505757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.505938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.505965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.506140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.506166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.506337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.506362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.506545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.506571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.506742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.506767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.506914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.506939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.507115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.507140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.507297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.507323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 
00:34:35.309 [2024-07-14 01:20:24.507490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.507515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.507682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.309 [2024-07-14 01:20:24.507706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.309 qpair failed and we were unable to recover it. 00:34:35.309 [2024-07-14 01:20:24.507884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.507910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.508087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.508113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.508282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.508308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.508454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.508479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.508648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.508672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.508819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.508844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.508994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.509020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.509216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.509241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 
00:34:35.310 [2024-07-14 01:20:24.509393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.509418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.509586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.509612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.509799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.509825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.509972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.509997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.510148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.510174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.510345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.510375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.510563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.510588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.510743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.510768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.510944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.510970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.511113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.511138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 
00:34:35.310 [2024-07-14 01:20:24.511284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.511308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.511484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.511509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.511663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.511688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.511840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.511871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.512050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.512075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.512251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.512278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.512430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.512456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.512636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.512660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.512803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.512828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.512989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.513015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 
00:34:35.310 [2024-07-14 01:20:24.513183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.513209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.513350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.513375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.513516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.513542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.513741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.513767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.513913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.513938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.514104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.514130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.514305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.514331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.514499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.310 [2024-07-14 01:20:24.514524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.310 qpair failed and we were unable to recover it. 00:34:35.310 [2024-07-14 01:20:24.514692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.514717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.514883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.514909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 
00:34:35.311 [2024-07-14 01:20:24.515073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.515099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.515269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.515295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.515479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.515505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.515711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.515735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.515884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.515910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.516079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.516105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.516284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.516310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.516455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.516480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.516659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.516685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.516833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.516858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 
00:34:35.311 [2024-07-14 01:20:24.517044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.517069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.517212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.517237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.517423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.517449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.517598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.517623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.517797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.517823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.518002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.518032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.518181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.518206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.518360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.518386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.518564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.518590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.518755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.518779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 
00:34:35.311 [2024-07-14 01:20:24.518960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.518987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.519142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.519168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.519338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.519364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.519506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.519531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.519698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.519724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.519895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.519922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.520089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.520114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.520279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.520305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.520478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.520504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.520690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.520716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 
00:34:35.311 [2024-07-14 01:20:24.520859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.520889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.521031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.521057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.521232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.521258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.521446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.521472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.521612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.521638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.521817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.521843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.522026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.522053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.522228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.522254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.522421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.311 [2024-07-14 01:20:24.522447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.311 qpair failed and we were unable to recover it. 00:34:35.311 [2024-07-14 01:20:24.522593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.522620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 
00:34:35.312 [2024-07-14 01:20:24.522786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.522812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.522987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.523013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5f20000b90 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.523177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.523219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.523379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.523406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.523557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.523582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.523728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.523753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.523901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.523927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.524080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.524105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.524263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.524290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.524430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.524455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 
00:34:35.312 [2024-07-14 01:20:24.524622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.524649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.524819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.524845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.525004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.525029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.525174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.525200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.525343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.525369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.525538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.525563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.525737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.525762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.525906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.525932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.526092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.526117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.526332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.526357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 
00:34:35.312 [2024-07-14 01:20:24.526512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.526537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.526679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.526704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.526848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.526879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.527026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.527050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.527186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.527222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.527372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.527398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.527565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.527590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.527764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.527789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.527941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.527968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.528113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.528138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 
00:34:35.312 [2024-07-14 01:20:24.528289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.528313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.528457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.528482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.528638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.528663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.528821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.528846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.528998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.529024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.529172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.529197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.529337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.529363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.529555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.529580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.529721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.529747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 00:34:35.312 [2024-07-14 01:20:24.529892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.312 [2024-07-14 01:20:24.529918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.312 qpair failed and we were unable to recover it. 
00:34:35.312 [2024-07-14 01:20:24.530070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.530095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.530247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.530272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.530409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.530435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.530599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.530625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.530784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.530809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.530969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.530995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.531169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.531194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.531364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.531390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.531555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.531580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.531760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.531786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 
00:34:35.313 [2024-07-14 01:20:24.531947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.531973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.532123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.532148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.532305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.532330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.532476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.532501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.532674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.532699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.532862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.532891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.533062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.533091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.533242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.533267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.533414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.533439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.533605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.533630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 
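(Editor's note, not part of the captured log.) On Linux, errno 111 is ECONNREFUSED: the records above show the initiator's nvme_tcp_qpair_connect_sock retrying the qpair connection to 10.0.0.2:4420 while no listener is accepting on that port, which is exactly what this target-disconnect test provokes. The following is a minimal, hypothetical sketch in plain POSIX C, not part of the SPDK test suite, that reproduces the same errno when nothing listens on the address from the log (assuming the interface is up, so the kernel answers with a RST instead of timing out):

    /* check_connect.c -- hypothetical sketch, not part of the SPDK test suite.
     * Attempts a plain TCP connect() to the NVMe/TCP listener address seen in
     * the log (10.0.0.2:4420). With no listener on that port, connect() fails
     * with errno 111 (ECONNREFUSED), matching the posix_sock_create errors above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the target down this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
            close(fd);
            return 1;
        }

        printf("connected\n");
        close(fd);
        return 0;
    }

The SPDK host keeps repeating this attempt for each qpair, which is why the same pair of posix.c / nvme_tcp.c errors recurs until the test either reconnects or gives up.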
00:34:35.313 [2024-07-14 01:20:24.533775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.533800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.533956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.533982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:35.313 [2024-07-14 01:20:24.534121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.534147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:35.313 [2024-07-14 01:20:24.534294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.534320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:35.313 [2024-07-14 01:20:24.534466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.534492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.313 [2024-07-14 01:20:24.534638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.534663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.534838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.534863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.535016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.535042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 
00:34:35.313 [2024-07-14 01:20:24.535192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.535218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.535360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.535386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.535555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.535580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.535743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.535777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.535954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.535981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.536124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.536149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.536289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.536315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.536462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.536488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.536642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.536667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.536826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.536851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 
00:34:35.313 [2024-07-14 01:20:24.537015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.537041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.313 [2024-07-14 01:20:24.537181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.313 [2024-07-14 01:20:24.537206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.313 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.537366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.537391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.537559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.537589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.537732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.537757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.537913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.537940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.538102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.538129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.538291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.538316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.538498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.538524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.538699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.538724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 
00:34:35.314 [2024-07-14 01:20:24.538902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.538928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.539071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.539098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.539251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.539277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.539433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.539459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.539617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.539643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.539814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.539840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.540007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.540034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.540183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.540220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.540410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.540435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.540617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.540643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 
00:34:35.314 [2024-07-14 01:20:24.540787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.540812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.541007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.541035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.541178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.541204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.541364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.541389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.541567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.541593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.541762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.541788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.541956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.541982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.542124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.542149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.542304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.542329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.542485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.542512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 
00:34:35.314 [2024-07-14 01:20:24.542659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.542685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.542846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.542897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.543040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.543066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.543225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.543250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.543430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.543456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.543628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.543653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.543792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.314 [2024-07-14 01:20:24.543817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.314 qpair failed and we were unable to recover it. 00:34:35.314 [2024-07-14 01:20:24.544000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.544026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.544173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.544198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.544337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.544362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 
00:34:35.315 [2024-07-14 01:20:24.544511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.544537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.544684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.544709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.544913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.544939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.545092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.545118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.545272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.545297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.545446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.545471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.545621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.545646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.545806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.545831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.545989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.546015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.546187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.546212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 
00:34:35.315 [2024-07-14 01:20:24.546423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.546448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.546618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.546644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.546816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.546841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.546990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.547017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.547178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.547204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.547382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.547408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.547548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.547575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.547727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.547753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.547945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.547971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.548119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.548146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 
00:34:35.315 [2024-07-14 01:20:24.548294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.548320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.548489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.548522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.548686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.548711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.548887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.548913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.549058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.549084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.549303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.549328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.549508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.549534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.549705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.549730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.549902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.549928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.550070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.550096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 
00:34:35.315 [2024-07-14 01:20:24.550274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.550301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.550471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.550500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.550670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.550696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.550835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.550886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 [2024-07-14 01:20:24.551036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.551062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.315 [2024-07-14 01:20:24.551226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.551252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:35.315 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.315 [2024-07-14 01:20:24.551417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.315 [2024-07-14 01:20:24.551442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.315 qpair failed and we were unable to recover it. 00:34:35.315 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.316 [2024-07-14 01:20:24.551584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.316 [2024-07-14 01:20:24.551614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.316 qpair failed and we were unable to recover it. 
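(Editor's note, not part of the captured log.) The interleaved xtrace lines above show target_disconnect.sh issuing "rpc_cmd bdev_malloc_create 64 512 -b Malloc0", i.e. creating a 64 MB malloc bdev with a 512-byte block size once the target application is up. As a rough illustration only, the sketch below sends an equivalent JSON-RPC request directly to the SPDK application socket; the socket path (/var/tmp/spdk.sock) and the JSON parameter names are assumptions based on common SPDK defaults, not taken from this log:

    /* malloc_bdev_rpc.c -- hypothetical sketch of the JSON-RPC call behind
     * "rpc_cmd bdev_malloc_create 64 512 -b Malloc0". The socket path and the
     * parameter names are assumptions based on common SPDK defaults. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        /* 64 MB total size with 512-byte blocks -> 131072 blocks. */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
            "\"params\":{\"name\":\"Malloc0\",\"num_blocks\":131072,\"block_size\":512}}";

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        if (write(fd, req, strlen(req)) < 0) { perror("write"); close(fd); return 1; }

        char resp[4096];
        ssize_t n = read(fd, resp, sizeof(resp) - 1);  /* response carries the new bdev name */
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp);
        }
        close(fd);
        return 0;
    }

In the test itself the same call is made through the rpc_cmd shell wrapper while the connect() retries above continue in the background.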
00:34:35.316 [2024-07-14 01:20:24.551764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.316 [2024-07-14 01:20:24.551789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:35.316 qpair failed and we were unable to recover it.
00:34:35.316 [... the same connect() failed (errno = 111) / sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats continuously for timestamps 01:20:24.551944 through 01:20:24.572820 ...]
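For readers scanning the repeated failures: errno 111 on Linux is ECONNREFUSED, so every connect() attempt to 10.0.0.2 port 4420 is being actively refused, which is consistent with the target side still being configured by the interleaved rpc_cmd calls and not yet accepting connections on that address. The errno mapping can be checked directly (illustration only):

  # ECONNREFUSED is errno 111 on Linux and renders as "Connection refused".
  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'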
00:34:35.318 [2024-07-14 01:20:24.572989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.318 [2024-07-14 01:20:24.573015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:35.318 qpair failed and we were unable to recover it.
00:34:35.318 [... connect()/qpair-failure sequence repeats at 01:20:24.573166 ...]
00:34:35.318 Malloc0
00:34:35.318 [... connect()/qpair-failure sequence repeats at 01:20:24.573383 and 01:20:24.573566 ...]
00:34:35.318 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:35.318 [... connect()/qpair-failure sequence repeats at 01:20:24.573740 ...]
00:34:35.318 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:35.318 [... connect()/qpair-failure sequence repeats at 01:20:24.573908 ...]
00:34:35.318 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:35.318 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:35.318 [... connect()/qpair-failure sequence repeats at 01:20:24.574111 and 01:20:24.574289 ...]
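The stray "Malloc0" line above appears to be the output of the earlier bdev_malloc_create call, and the nvmf_create_transport trace initializes the NVMe-oF TCP transport inside the target; this has to happen before subsystems and listeners can be added. A standalone sketch of the same call via rpc.py (the socket path is an assumption, and the extra -o flag from the trace is omitted here rather than guessed at):

  # Initialize the TCP transport in the running SPDK target.
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp

  # Inspect the transports that are now registered.
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_transports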
00:34:35.319 [2024-07-14 01:20:24.574491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.319 [2024-07-14 01:20:24.574516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:35.319 qpair failed and we were unable to recover it.
00:34:35.319 [... the same connect()/qpair-failure sequence repeats for timestamps 01:20:24.574673 through 01:20:24.576201 ...]
00:34:35.319 [2024-07-14 01:20:24.576370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.319 [2024-07-14 01:20:24.576396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:35.319 qpair failed and we were unable to recover it.
00:34:35.319 [... connect()/qpair-failure sequence repeats at 01:20:24.576541, 01:20:24.576707, 01:20:24.576876 and 01:20:24.577044 ...]
00:34:35.319 [2024-07-14 01:20:24.577056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:35.319 [... the connect()/qpair-failure sequence continues to repeat through 01:20:24.577993 ...]
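The "*** TCP Transport Init ***" notice from tcp.c above is the target-side confirmation that nvmf_create_transport completed. The initiator's connects still fail with ECONNREFUSED because creating a transport does not by itself open a listening socket; that only happens once a listener is added for a subsystem. A hedged sketch of that later step (the NQN, address and port mirror values visible in this log, but this command is not part of this excerpt):

  # Add a TCP listener so that 10.0.0.2:4420 actually accepts connections.
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420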
00:34:35.319 [2024-07-14 01:20:24.578169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:35.319 [2024-07-14 01:20:24.578194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420
00:34:35.319 qpair failed and we were unable to recover it.
00:34:35.320 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for timestamps 01:20:24.578347 through 01:20:24.585225 ...]
00:34:35.320 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:35.320 [... connect()/qpair-failure sequence repeats at 01:20:24.585372 ...]
00:34:35.320 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:35.320 [2024-07-14 01:20:24.585535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.585566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.320 [2024-07-14 01:20:24.585703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.320 [2024-07-14 01:20:24.585728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.585889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.585915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.586059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.586085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.586245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.586271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.586449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.586473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.586631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.586656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.586813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.586838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.587005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.587031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 
00:34:35.320 [2024-07-14 01:20:24.587188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.587212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.587377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.587402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.587557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.587582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.587742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.587767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.587966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.587992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.588141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.588166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.588342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.588367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.588515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.588539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.588710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.588735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.320 qpair failed and we were unable to recover it. 00:34:35.320 [2024-07-14 01:20:24.588883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.320 [2024-07-14 01:20:24.588908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 
00:34:35.321 [2024-07-14 01:20:24.589053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.589078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.589219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.589244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.589392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.589417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.589595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.589620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.589769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.589794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.589976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.590001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.590149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.590175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.590348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.590372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.590524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.590549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.590706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.590731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 
00:34:35.321 [2024-07-14 01:20:24.590900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.590925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.591086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.591111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.591254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.591279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.591450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.591475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.591642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.591667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.591807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.591832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.591997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.592023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.592217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.592242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.592432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.592457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.592619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.592644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 
00:34:35.321 [2024-07-14 01:20:24.592785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.592810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.592963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.592989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.593134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.593160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.321 [2024-07-14 01:20:24.593314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.593340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:35.321 [2024-07-14 01:20:24.593500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.593526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.321 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.321 [2024-07-14 01:20:24.593700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.593726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.593881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.593907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.594061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.594088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 
00:34:35.321 [2024-07-14 01:20:24.594246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.594271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.594429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.594454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.594590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.594615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.594756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.594781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.594935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.594961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.595115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.595141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.595291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.595316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.595464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.595489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.595626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.595651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.595828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.595854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 
00:34:35.321 [2024-07-14 01:20:24.596014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.321 [2024-07-14 01:20:24.596039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.321 qpair failed and we were unable to recover it. 00:34:35.321 [2024-07-14 01:20:24.596210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.596235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.596392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.596417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.596597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.596622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.596807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.596832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.597001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.597027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.597229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.597254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.597414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.597439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.597579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.597604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.597785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.597811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 
00:34:35.322 [2024-07-14 01:20:24.597991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.598017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.598178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.598204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.598397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.598422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.598594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.598620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.598759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.598784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.598925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.598952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.599110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.599135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.599309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.599334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.599476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.599501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.599673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.599698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 
00:34:35.322 [2024-07-14 01:20:24.599870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.599895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.600035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.600060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.600201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.600227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.600403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.600428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.600625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.600650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.600819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.600843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.601016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.601041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.601181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.601206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.322 [2024-07-14 01:20:24.601376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.601401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 
00:34:35.322 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:35.322 [2024-07-14 01:20:24.601540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.601566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.322 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.322 [2024-07-14 01:20:24.601732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.601757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.601927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.601952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.602101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.602126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.602304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.602329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.602480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.602506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.602674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.602699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.602885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.602911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.603051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.603077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 
00:34:35.322 [2024-07-14 01:20:24.603235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.603260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.603410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.603435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.322 [2024-07-14 01:20:24.603584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.322 [2024-07-14 01:20:24.603609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.322 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.603794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.603819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.603970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.603996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.604145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.604170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.604354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.604379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.604516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.604541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.604687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.604712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.604847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.604882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 
00:34:35.323 [2024-07-14 01:20:24.605033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.605058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.605221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:35.323 [2024-07-14 01:20:24.605246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3600 with addr=10.0.0.2, port=4420 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.605273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.323 [2024-07-14 01:20:24.607850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.608038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.608065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.608081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.323 [2024-07-14 01:20:24.608094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.323 [2024-07-14 01:20:24.608127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.323 qpair failed and we were unable to recover it. 
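(Editor's note, not part of the captured log.) After the `nvmf_tcp_listen` NOTICE above, the failure mode changes: the TCP connection now succeeds, but the NVMe-oF Fabrics CONNECT for the I/O queue pair is rejected. The target reports "Unknown controller ID 0x1" and the host sees the command complete with sct 1, sc 130 (0x82). Reading those values against the NVMe-oF specification: SCT 1 is the command-specific status type, and for a Connect command SC 0x82 is "Connect Invalid Parameters", which is consistent with an I/O-queue CONNECT that names a controller ID the target no longer tracks. The small stand-alone C sketch below decodes the pair using the spec's value names; it is an illustration of the mapping, not SPDK's own status handling.

```c
/* Minimal sketch: map the "sct 1, sc 130" pair from the log onto the
 * NVMe-oF Fabrics Connect command-specific status names from the spec.
 * Illustration only -- not the SPDK implementation. */
#include <stdio.h>

static const char *connect_status_name(unsigned sct, unsigned sc)
{
    if (sct != 0x1) {                  /* 0x1 = Command Specific status type */
        return "not a command-specific status";
    }
    switch (sc) {
    case 0x80: return "Incompatible Format";
    case 0x81: return "Controller Busy";
    case 0x82: return "Connect Invalid Parameters";
    case 0x83: return "Connect Restart Discovery";
    case 0x84: return "Connect Invalid Host";
    default:   return "unrecognized command-specific status";
    }
}

int main(void)
{
    /* Values taken directly from the log: sct 1, sc 130 (0x82). */
    printf("sct 1, sc 130 -> %s\n", connect_status_name(0x1, 130));
    return 0;
}
```

The follow-on host-side messages ("Failed to poll NVMe-oF Fabric CONNECT command", then "CQ transport error -6 (No such device or address) on qpair id 2") are the qpair being torn down after that rejection; -6 is ENXIO. This same three-step pattern repeats for each test iteration in the remainder of the log.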
00:34:35.323 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.323 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:35.323 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.323 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:35.323 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.323 01:20:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1300615 00:34:35.323 [2024-07-14 01:20:24.617683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.617838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.617872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.617889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.323 [2024-07-14 01:20:24.617902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.323 [2024-07-14 01:20:24.617931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.627683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.627842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.627874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.627890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.323 [2024-07-14 01:20:24.627908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.323 [2024-07-14 01:20:24.627937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.323 qpair failed and we were unable to recover it. 
00:34:35.323 [2024-07-14 01:20:24.637767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.637930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.637956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.637970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.323 [2024-07-14 01:20:24.637982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.323 [2024-07-14 01:20:24.638012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.647669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.647827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.647863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.647886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.323 [2024-07-14 01:20:24.647899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.323 [2024-07-14 01:20:24.647927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.657673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.657816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.657841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.657855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.323 [2024-07-14 01:20:24.657875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.323 [2024-07-14 01:20:24.657904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.323 qpair failed and we were unable to recover it. 
00:34:35.323 [2024-07-14 01:20:24.667732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.668003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.668030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.668044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.323 [2024-07-14 01:20:24.668058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.323 [2024-07-14 01:20:24.668086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.677745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.677911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.677937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.677951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.323 [2024-07-14 01:20:24.677964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.323 [2024-07-14 01:20:24.677992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.323 qpair failed and we were unable to recover it. 00:34:35.323 [2024-07-14 01:20:24.687767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.323 [2024-07-14 01:20:24.687932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.323 [2024-07-14 01:20:24.687959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.323 [2024-07-14 01:20:24.687984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.324 [2024-07-14 01:20:24.687998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.324 [2024-07-14 01:20:24.688026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.324 qpair failed and we were unable to recover it. 
00:34:35.581 [2024-07-14 01:20:24.697757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.581 [2024-07-14 01:20:24.697916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.581 [2024-07-14 01:20:24.697942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.581 [2024-07-14 01:20:24.697956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.581 [2024-07-14 01:20:24.697969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.581 [2024-07-14 01:20:24.697998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.581 qpair failed and we were unable to recover it. 00:34:35.581 [2024-07-14 01:20:24.707923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.581 [2024-07-14 01:20:24.708089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.581 [2024-07-14 01:20:24.708114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.581 [2024-07-14 01:20:24.708128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.581 [2024-07-14 01:20:24.708141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.581 [2024-07-14 01:20:24.708169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.581 qpair failed and we were unable to recover it. 00:34:35.581 [2024-07-14 01:20:24.717875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.581 [2024-07-14 01:20:24.718031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.718057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.718070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.718089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.718117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 
00:34:35.582 [2024-07-14 01:20:24.727921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.728094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.728119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.728133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.728146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.728173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.737902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.738054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.738080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.738094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.738107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.738134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.747939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.748140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.748166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.748180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.748193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.748220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 
00:34:35.582 [2024-07-14 01:20:24.757990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.758145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.758180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.758194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.758206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.758234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.768030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.768174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.768200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.768214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.768226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.768254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.778033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.778173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.778198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.778212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.778225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.778252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 
00:34:35.582 [2024-07-14 01:20:24.788094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.788253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.788278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.788292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.788305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.788335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.798103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.798270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.798295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.798309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.798322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.798350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.808107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.808272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.808297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.808317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.808331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.808363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 
00:34:35.582 [2024-07-14 01:20:24.818174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.818327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.818352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.818366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.818379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.818406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.828278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.828431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.828458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.828478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.828499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.828527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.838214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.838364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.838389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.838403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.838416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.838444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 
00:34:35.582 [2024-07-14 01:20:24.848330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.848481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.848507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.848521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.582 [2024-07-14 01:20:24.848534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.582 [2024-07-14 01:20:24.848561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.582 qpair failed and we were unable to recover it. 00:34:35.582 [2024-07-14 01:20:24.858271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.582 [2024-07-14 01:20:24.858433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.582 [2024-07-14 01:20:24.858458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.582 [2024-07-14 01:20:24.858472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.858485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.858512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.868386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.868537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.868562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.868576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.868589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.868619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 
00:34:35.583 [2024-07-14 01:20:24.878365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.878557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.878581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.878595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.878608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.878637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.888423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.888572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.888598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.888612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.888625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.888653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.898433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.898585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.898610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.898631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.898644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.898672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 
00:34:35.583 [2024-07-14 01:20:24.908437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.908584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.908609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.908623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.908636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.908663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.918518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.918709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.918734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.918748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.918761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.918788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.928446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.928630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.928655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.928669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.928682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.928709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 
00:34:35.583 [2024-07-14 01:20:24.938470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.938611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.938636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.938650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.938663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.938691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.948523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.948670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.948697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.948711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.948724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.948752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.958568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.958718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.958743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.958758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.958771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.958799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 
00:34:35.583 [2024-07-14 01:20:24.968592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.968744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.968770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.968784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.968797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.968824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.978626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.978779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.978807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.978822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.978835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.978863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 00:34:35.583 [2024-07-14 01:20:24.988677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.583 [2024-07-14 01:20:24.988823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.583 [2024-07-14 01:20:24.988855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.583 [2024-07-14 01:20:24.988881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.583 [2024-07-14 01:20:24.988896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.583 [2024-07-14 01:20:24.988925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.583 qpair failed and we were unable to recover it. 
00:34:35.841 [2024-07-14 01:20:24.998795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.841 [2024-07-14 01:20:24.998955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.841 [2024-07-14 01:20:24.998981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.841 [2024-07-14 01:20:24.998996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.841 [2024-07-14 01:20:24.999009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.841 [2024-07-14 01:20:24.999038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.841 qpair failed and we were unable to recover it. 00:34:35.841 [2024-07-14 01:20:25.008744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.841 [2024-07-14 01:20:25.008905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.841 [2024-07-14 01:20:25.008932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.841 [2024-07-14 01:20:25.008947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.841 [2024-07-14 01:20:25.008959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.841 [2024-07-14 01:20:25.008988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.841 qpair failed and we were unable to recover it. 00:34:35.841 [2024-07-14 01:20:25.018748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.841 [2024-07-14 01:20:25.018905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.841 [2024-07-14 01:20:25.018931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.841 [2024-07-14 01:20:25.018945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.841 [2024-07-14 01:20:25.018958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.841 [2024-07-14 01:20:25.018987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.841 qpair failed and we were unable to recover it. 
00:34:35.841 [2024-07-14 01:20:25.028750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.841 [2024-07-14 01:20:25.028902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.841 [2024-07-14 01:20:25.028927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.841 [2024-07-14 01:20:25.028941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.841 [2024-07-14 01:20:25.028955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.841 [2024-07-14 01:20:25.028982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.841 qpair failed and we were unable to recover it. 00:34:35.841 [2024-07-14 01:20:25.038903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.841 [2024-07-14 01:20:25.039056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.841 [2024-07-14 01:20:25.039081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.841 [2024-07-14 01:20:25.039096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.841 [2024-07-14 01:20:25.039108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.841 [2024-07-14 01:20:25.039136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.841 qpair failed and we were unable to recover it. 00:34:35.841 [2024-07-14 01:20:25.048824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.841 [2024-07-14 01:20:25.048992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.841 [2024-07-14 01:20:25.049019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.841 [2024-07-14 01:20:25.049033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.841 [2024-07-14 01:20:25.049046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.841 [2024-07-14 01:20:25.049073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.841 qpair failed and we were unable to recover it. 
00:34:35.841 [2024-07-14 01:20:25.058856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.841 [2024-07-14 01:20:25.059020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.841 [2024-07-14 01:20:25.059045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.841 [2024-07-14 01:20:25.059060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.841 [2024-07-14 01:20:25.059073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.841 [2024-07-14 01:20:25.059100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.841 qpair failed and we were unable to recover it. 00:34:35.841 [2024-07-14 01:20:25.068923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.841 [2024-07-14 01:20:25.069079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.841 [2024-07-14 01:20:25.069105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.841 [2024-07-14 01:20:25.069119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.841 [2024-07-14 01:20:25.069132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.069159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.078932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.079081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.079111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.079125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.079138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.079165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 
00:34:35.842 [2024-07-14 01:20:25.088941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.089086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.089112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.089126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.089139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.089176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.098982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.099128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.099153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.099168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.099181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.099208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.109020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.109168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.109193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.109207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.109222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.109249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 
00:34:35.842 [2024-07-14 01:20:25.119047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.119198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.119223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.119237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.119250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.119283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.129052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.129193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.129218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.129232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.129245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.129272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.139214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.139365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.139389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.139403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.139416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.139443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 
00:34:35.842 [2024-07-14 01:20:25.149106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.149252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.149277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.149291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.149304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.149331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.159143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.159305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.159330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.159344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.159357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.159384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.169176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.169337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.169367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.169382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.169394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.169422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 
00:34:35.842 [2024-07-14 01:20:25.179250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.179422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.179449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.179464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.179477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.179507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.189224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.189418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.189444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.189459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.189471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.189499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.199318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.199487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.199512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.199526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.199539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.199567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 
00:34:35.842 [2024-07-14 01:20:25.209334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.842 [2024-07-14 01:20:25.209484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.842 [2024-07-14 01:20:25.209509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.842 [2024-07-14 01:20:25.209523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.842 [2024-07-14 01:20:25.209536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.842 [2024-07-14 01:20:25.209569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.842 qpair failed and we were unable to recover it. 00:34:35.842 [2024-07-14 01:20:25.219396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.843 [2024-07-14 01:20:25.219537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.843 [2024-07-14 01:20:25.219562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.843 [2024-07-14 01:20:25.219576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.843 [2024-07-14 01:20:25.219589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.843 [2024-07-14 01:20:25.219616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.843 qpair failed and we were unable to recover it. 00:34:35.843 [2024-07-14 01:20:25.229360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.843 [2024-07-14 01:20:25.229510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.843 [2024-07-14 01:20:25.229535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.843 [2024-07-14 01:20:25.229549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.843 [2024-07-14 01:20:25.229562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.843 [2024-07-14 01:20:25.229590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.843 qpair failed and we were unable to recover it. 
00:34:35.843 [2024-07-14 01:20:25.239404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.843 [2024-07-14 01:20:25.239562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.843 [2024-07-14 01:20:25.239586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.843 [2024-07-14 01:20:25.239599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.843 [2024-07-14 01:20:25.239610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.843 [2024-07-14 01:20:25.239637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.843 qpair failed and we were unable to recover it. 00:34:35.843 [2024-07-14 01:20:25.249422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.843 [2024-07-14 01:20:25.249591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.843 [2024-07-14 01:20:25.249616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.843 [2024-07-14 01:20:25.249630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.843 [2024-07-14 01:20:25.249643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:35.843 [2024-07-14 01:20:25.249670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.843 qpair failed and we were unable to recover it. 00:34:36.102 [2024-07-14 01:20:25.259480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.102 [2024-07-14 01:20:25.259626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.102 [2024-07-14 01:20:25.259661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.102 [2024-07-14 01:20:25.259676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.102 [2024-07-14 01:20:25.259689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.102 [2024-07-14 01:20:25.259717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.102 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 01:20:25.269481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.269628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.269654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.269668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.269681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.269708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.279479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.279630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.279655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.279669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.279681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.279711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.289551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.289704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.289729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.289744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.289756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.289784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 01:20:25.299543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.299694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.299719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.299733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.299746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.299780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.309588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.309733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.309758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.309773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.309785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.309813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.319610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.319763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.319788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.319802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.319815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.319842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 01:20:25.329642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.329787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.329812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.329826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.329840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.329874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.339657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.339800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.339825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.339839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.339851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.339885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.349690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.349832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.349862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.349885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.349898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.349926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 01:20:25.359819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.359985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.360010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.360024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.360036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.360064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.369881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.370048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.370073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.370087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.370100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.370127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.379770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.379927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.379953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.379967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.379980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.380008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 
00:34:36.103 [2024-07-14 01:20:25.389816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.389967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.389993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.390008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.390026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.390054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.399858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.400023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.103 [2024-07-14 01:20:25.400048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.103 [2024-07-14 01:20:25.400062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.103 [2024-07-14 01:20:25.400075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.103 [2024-07-14 01:20:25.400103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.103 qpair failed and we were unable to recover it. 00:34:36.103 [2024-07-14 01:20:25.409906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.103 [2024-07-14 01:20:25.410083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.410110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.410129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.410143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.410172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 01:20:25.420104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.420267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.420293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.420307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.420319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.420347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 01:20:25.429970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.430119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.430144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.430158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.430171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.430199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 01:20:25.440019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.440177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.440202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.440216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.440229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.440257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 01:20:25.450050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.450195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.450221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.450234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.450247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.450275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 01:20:25.460070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.460230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.460254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.460268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.460280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.460307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 01:20:25.470158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.470308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.470333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.470347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.470360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.470387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 01:20:25.480156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.480354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.480378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.480392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.480413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.480442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 01:20:25.490118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.490267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.490291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.490305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.490318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.490346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.104 [2024-07-14 01:20:25.500155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.500299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.500324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.500337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.500350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.500378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 
00:34:36.104 [2024-07-14 01:20:25.510169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.104 [2024-07-14 01:20:25.510329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.104 [2024-07-14 01:20:25.510353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.104 [2024-07-14 01:20:25.510367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.104 [2024-07-14 01:20:25.510380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.104 [2024-07-14 01:20:25.510407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.104 qpair failed and we were unable to recover it. 00:34:36.364 [2024-07-14 01:20:25.520234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.364 [2024-07-14 01:20:25.520388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.364 [2024-07-14 01:20:25.520414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.364 [2024-07-14 01:20:25.520429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.364 [2024-07-14 01:20:25.520442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.364 [2024-07-14 01:20:25.520470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.364 qpair failed and we were unable to recover it. 00:34:36.364 [2024-07-14 01:20:25.530274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.364 [2024-07-14 01:20:25.530431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.364 [2024-07-14 01:20:25.530457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.364 [2024-07-14 01:20:25.530471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.364 [2024-07-14 01:20:25.530484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.364 [2024-07-14 01:20:25.530511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.364 qpair failed and we were unable to recover it. 
00:34:36.364 [2024-07-14 01:20:25.540253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.540394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.540418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.540432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.540445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.540473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.550302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.550471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.550497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.550511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.550524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.550551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.560352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.560502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.560527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.560541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.560553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.560581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 
00:34:36.365 [2024-07-14 01:20:25.570373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.570535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.570560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.570580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.570594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.570622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.580379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.580528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.580552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.580566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.580579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.580608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.590392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.590549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.590573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.590587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.590599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.590628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 
00:34:36.365 [2024-07-14 01:20:25.600506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.600723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.600748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.600762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.600775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.600802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.610456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.610607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.610632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.610646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.610659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.610686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.620544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.620702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.620728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.620742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.620754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.620782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 
00:34:36.365 [2024-07-14 01:20:25.630512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.630657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.630683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.630697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.630710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.630737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.640549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.640703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.640728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.640742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.640755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.640782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.650570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.650738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.650764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.650778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.650791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.650818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 
00:34:36.365 [2024-07-14 01:20:25.660599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.660756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.660782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.660806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.660821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.660851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.670679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.670845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.670877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.365 [2024-07-14 01:20:25.670892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.365 [2024-07-14 01:20:25.670906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.365 [2024-07-14 01:20:25.670934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.365 qpair failed and we were unable to recover it. 00:34:36.365 [2024-07-14 01:20:25.680652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.365 [2024-07-14 01:20:25.680800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.365 [2024-07-14 01:20:25.680825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.680839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.680852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.680884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 
00:34:36.366 [2024-07-14 01:20:25.690792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.690943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.690969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.690984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.690996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.691024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 00:34:36.366 [2024-07-14 01:20:25.700712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.700893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.700919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.700933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.700946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.700974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 00:34:36.366 [2024-07-14 01:20:25.710733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.710889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.710914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.710929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.710942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.710969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 
00:34:36.366 [2024-07-14 01:20:25.720799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.720972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.720997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.721011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.721023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.721052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 00:34:36.366 [2024-07-14 01:20:25.730804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.730957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.730983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.730997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.731009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.731037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 00:34:36.366 [2024-07-14 01:20:25.740845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.740997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.741022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.741037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.741049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.741077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 
00:34:36.366 [2024-07-14 01:20:25.750892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.751045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.751071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.751097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.751111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.751139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 00:34:36.366 [2024-07-14 01:20:25.760889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.761044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.761070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.761084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.761097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.761125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 00:34:36.366 [2024-07-14 01:20:25.770925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.366 [2024-07-14 01:20:25.771074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.366 [2024-07-14 01:20:25.771100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.366 [2024-07-14 01:20:25.771114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.366 [2024-07-14 01:20:25.771127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.366 [2024-07-14 01:20:25.771154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.366 qpair failed and we were unable to recover it. 
00:34:36.625 [2024-07-14 01:20:25.780940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.625 [2024-07-14 01:20:25.781093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.625 [2024-07-14 01:20:25.781118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.625 [2024-07-14 01:20:25.781133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.625 [2024-07-14 01:20:25.781146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.625 [2024-07-14 01:20:25.781174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.625 qpair failed and we were unable to recover it. 00:34:36.625 [2024-07-14 01:20:25.790995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.625 [2024-07-14 01:20:25.791192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.625 [2024-07-14 01:20:25.791218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.625 [2024-07-14 01:20:25.791232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.625 [2024-07-14 01:20:25.791245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.625 [2024-07-14 01:20:25.791273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.625 qpair failed and we were unable to recover it. 00:34:36.625 [2024-07-14 01:20:25.801037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.625 [2024-07-14 01:20:25.801190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.625 [2024-07-14 01:20:25.801215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.625 [2024-07-14 01:20:25.801228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.625 [2024-07-14 01:20:25.801241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.625 [2024-07-14 01:20:25.801269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.625 qpair failed and we were unable to recover it. 
00:34:36.625 [2024-07-14 01:20:25.811035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.625 [2024-07-14 01:20:25.811230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.625 [2024-07-14 01:20:25.811255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.625 [2024-07-14 01:20:25.811269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.625 [2024-07-14 01:20:25.811283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.625 [2024-07-14 01:20:25.811310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.625 qpair failed and we were unable to recover it. 00:34:36.625 [2024-07-14 01:20:25.821026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.625 [2024-07-14 01:20:25.821174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.625 [2024-07-14 01:20:25.821199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.821213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.821225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.821253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.831087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.831241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.831266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.831280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.831293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.831320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 
00:34:36.626 [2024-07-14 01:20:25.841116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.841263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.841292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.841307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.841320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.841348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.851139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.851324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.851351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.851366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.851379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.851408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.861172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.861341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.861366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.861381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.861394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.861421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 
00:34:36.626 [2024-07-14 01:20:25.871175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.871317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.871342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.871356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.871369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.871396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.881205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.881358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.881382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.881396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.881409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.881436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.891262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.891411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.891436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.891450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.891463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.891490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 
00:34:36.626 [2024-07-14 01:20:25.901262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.901418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.901443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.901457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.901470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.901498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.911283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.911430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.911456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.911470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.911483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.911510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.921336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.921498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.921523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.921537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.921549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.921578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 
00:34:36.626 [2024-07-14 01:20:25.931400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.931549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.931579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.931593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.931606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.931633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.941465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.941607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.941636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.941650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.941662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.941690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 00:34:36.626 [2024-07-14 01:20:25.951434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.951584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.951609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.951623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.951636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.626 [2024-07-14 01:20:25.951663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.626 qpair failed and we were unable to recover it. 
00:34:36.626 [2024-07-14 01:20:25.961532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.626 [2024-07-14 01:20:25.961689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.626 [2024-07-14 01:20:25.961715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.626 [2024-07-14 01:20:25.961729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.626 [2024-07-14 01:20:25.961742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.627 [2024-07-14 01:20:25.961769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.627 qpair failed and we were unable to recover it. 00:34:36.627 [2024-07-14 01:20:25.971464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.627 [2024-07-14 01:20:25.971665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.627 [2024-07-14 01:20:25.971690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.627 [2024-07-14 01:20:25.971704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.627 [2024-07-14 01:20:25.971716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.627 [2024-07-14 01:20:25.971750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.627 qpair failed and we were unable to recover it. 00:34:36.627 [2024-07-14 01:20:25.981479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.627 [2024-07-14 01:20:25.981641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.627 [2024-07-14 01:20:25.981666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.627 [2024-07-14 01:20:25.981681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.627 [2024-07-14 01:20:25.981694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.627 [2024-07-14 01:20:25.981721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.627 qpair failed and we were unable to recover it. 
00:34:36.627 [2024-07-14 01:20:25.991557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.627 [2024-07-14 01:20:25.991729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.627 [2024-07-14 01:20:25.991757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.627 [2024-07-14 01:20:25.991772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.627 [2024-07-14 01:20:25.991786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.627 [2024-07-14 01:20:25.991814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.627 qpair failed and we were unable to recover it. 00:34:36.627 [2024-07-14 01:20:26.001574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.627 [2024-07-14 01:20:26.001731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.627 [2024-07-14 01:20:26.001757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.627 [2024-07-14 01:20:26.001771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.627 [2024-07-14 01:20:26.001783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.627 [2024-07-14 01:20:26.001812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.627 qpair failed and we were unable to recover it. 00:34:36.627 [2024-07-14 01:20:26.011588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.627 [2024-07-14 01:20:26.011734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.627 [2024-07-14 01:20:26.011760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.627 [2024-07-14 01:20:26.011774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.627 [2024-07-14 01:20:26.011787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.627 [2024-07-14 01:20:26.011814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.627 qpair failed and we were unable to recover it. 
00:34:36.627 [2024-07-14 01:20:26.021606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.627 [2024-07-14 01:20:26.021760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.627 [2024-07-14 01:20:26.021791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.627 [2024-07-14 01:20:26.021806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.627 [2024-07-14 01:20:26.021819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.627 [2024-07-14 01:20:26.021847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.627 qpair failed and we were unable to recover it. 00:34:36.627 [2024-07-14 01:20:26.031660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.627 [2024-07-14 01:20:26.031838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.627 [2024-07-14 01:20:26.031863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.627 [2024-07-14 01:20:26.031886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.627 [2024-07-14 01:20:26.031899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.627 [2024-07-14 01:20:26.031927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.627 qpair failed and we were unable to recover it. 00:34:36.891 [2024-07-14 01:20:26.041701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.041901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.041927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.041941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.041955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.041983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 
00:34:36.891 [2024-07-14 01:20:26.051713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.051918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.051944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.051959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.051972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.051999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 00:34:36.891 [2024-07-14 01:20:26.061731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.061891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.061917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.061931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.061944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.061978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 00:34:36.891 [2024-07-14 01:20:26.071742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.071923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.071949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.071964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.071978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.072006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 
00:34:36.891 [2024-07-14 01:20:26.081811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.081976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.082002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.082016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.082029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.082057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 00:34:36.891 [2024-07-14 01:20:26.091820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.091979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.092004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.092018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.092031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.092059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 00:34:36.891 [2024-07-14 01:20:26.101846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.102006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.102031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.102045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.102059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.102086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 
00:34:36.891 [2024-07-14 01:20:26.111872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.112029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.112059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.112074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.112086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.112114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 00:34:36.891 [2024-07-14 01:20:26.121939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.122094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.122119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.122133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.122146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.122173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 00:34:36.891 [2024-07-14 01:20:26.131963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.132109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.132144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.132160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.132173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.132201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 
00:34:36.891 [2024-07-14 01:20:26.141974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.142126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.891 [2024-07-14 01:20:26.142152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.891 [2024-07-14 01:20:26.142166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.891 [2024-07-14 01:20:26.142179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.891 [2024-07-14 01:20:26.142206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.891 qpair failed and we were unable to recover it. 00:34:36.891 [2024-07-14 01:20:26.152005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.891 [2024-07-14 01:20:26.152172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.152197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.152211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.152229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.152258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.162034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.162183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.162209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.162223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.162236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.162264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 
00:34:36.892 [2024-07-14 01:20:26.172145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.172295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.172320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.172334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.172347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.172374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.182071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.182221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.182246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.182260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.182272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.182300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.192125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.192275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.192300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.192314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.192327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.192354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 
00:34:36.892 [2024-07-14 01:20:26.202156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.202357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.202382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.202397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.202410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.202436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.212192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.212340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.212366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.212380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.212392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.212419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.222246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.222395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.222420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.222434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.222446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.222475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 
00:34:36.892 [2024-07-14 01:20:26.232225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.232366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.232391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.232405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.232417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.232444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.242286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.242455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.242479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.242492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.242512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.242540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.252285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.252443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.252468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.252482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.252495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.252523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 
00:34:36.892 [2024-07-14 01:20:26.262379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.262525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.262551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.262565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.262578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.262606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.272327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.272476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.892 [2024-07-14 01:20:26.272502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.892 [2024-07-14 01:20:26.272516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.892 [2024-07-14 01:20:26.272529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.892 [2024-07-14 01:20:26.272556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.892 qpair failed and we were unable to recover it. 00:34:36.892 [2024-07-14 01:20:26.282367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.892 [2024-07-14 01:20:26.282517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.893 [2024-07-14 01:20:26.282541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.893 [2024-07-14 01:20:26.282555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.893 [2024-07-14 01:20:26.282568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.893 [2024-07-14 01:20:26.282595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.893 qpair failed and we were unable to recover it. 
00:34:36.893 [2024-07-14 01:20:26.292422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.893 [2024-07-14 01:20:26.292572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.893 [2024-07-14 01:20:26.292597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.893 [2024-07-14 01:20:26.292611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.893 [2024-07-14 01:20:26.292624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.893 [2024-07-14 01:20:26.292651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.893 qpair failed and we were unable to recover it. 00:34:36.893 [2024-07-14 01:20:26.302470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:36.893 [2024-07-14 01:20:26.302617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:36.893 [2024-07-14 01:20:26.302650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:36.893 [2024-07-14 01:20:26.302677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:36.893 [2024-07-14 01:20:26.302694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:36.893 [2024-07-14 01:20:26.302723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:36.893 qpair failed and we were unable to recover it. 00:34:37.151 [2024-07-14 01:20:26.312506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.151 [2024-07-14 01:20:26.312658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.151 [2024-07-14 01:20:26.312686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.151 [2024-07-14 01:20:26.312700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.151 [2024-07-14 01:20:26.312714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.151 [2024-07-14 01:20:26.312742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.151 qpair failed and we were unable to recover it. 
00:34:37.151 [2024-07-14 01:20:26.322545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.151 [2024-07-14 01:20:26.322741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.322766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.322780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.322793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.322821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.332612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.332798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.332823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.332837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.332855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.332893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.342532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.342671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.342696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.342710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.342723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.342750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 
00:34:37.152 [2024-07-14 01:20:26.352564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.352713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.352738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.352752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.352765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.352792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.362653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.362806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.362830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.362844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.362857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.362895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.372649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.372799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.372824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.372838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.372851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.372886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 
00:34:37.152 [2024-07-14 01:20:26.382651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.382816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.382841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.382856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.382875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.382904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.392683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.392830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.392855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.392879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.392895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.392922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.402730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.402924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.402949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.402963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.402975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.403003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 
00:34:37.152 [2024-07-14 01:20:26.412734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.412903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.412928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.412942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.412954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.412982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.422778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.422932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.422957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.422977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.422990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.423018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.432821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.432980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.433005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.433019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.433033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.433060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 
00:34:37.152 [2024-07-14 01:20:26.442863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.443027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.443052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.443066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.443079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.443107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.452843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.453043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.453068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.453082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.152 [2024-07-14 01:20:26.453094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.152 [2024-07-14 01:20:26.453124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.152 qpair failed and we were unable to recover it. 00:34:37.152 [2024-07-14 01:20:26.462912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.152 [2024-07-14 01:20:26.463058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.152 [2024-07-14 01:20:26.463083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.152 [2024-07-14 01:20:26.463097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.463110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.463138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 
00:34:37.153 [2024-07-14 01:20:26.472935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.473132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.473157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.473171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.473184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.473211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 00:34:37.153 [2024-07-14 01:20:26.482964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.483119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.483144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.483159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.483172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.483199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 00:34:37.153 [2024-07-14 01:20:26.492961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.493113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.493138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.493152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.493165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.493192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 
00:34:37.153 [2024-07-14 01:20:26.502995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.503143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.503168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.503182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.503195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.503222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 00:34:37.153 [2024-07-14 01:20:26.513025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.513166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.513191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.513211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.513224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.513252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 00:34:37.153 [2024-07-14 01:20:26.523077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.523231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.523257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.523271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.523283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.523311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 
00:34:37.153 [2024-07-14 01:20:26.533080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.533232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.533257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.533272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.533285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.533312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 00:34:37.153 [2024-07-14 01:20:26.543152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.543343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.543369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.543383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.543396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.543423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 00:34:37.153 [2024-07-14 01:20:26.553138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.553281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.553306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.553320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.553333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.553360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 
00:34:37.153 [2024-07-14 01:20:26.563197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.153 [2024-07-14 01:20:26.563365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.153 [2024-07-14 01:20:26.563391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.153 [2024-07-14 01:20:26.563405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.153 [2024-07-14 01:20:26.563418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.153 [2024-07-14 01:20:26.563446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.153 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-14 01:20:26.573207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.412 [2024-07-14 01:20:26.573374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.412 [2024-07-14 01:20:26.573400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.412 [2024-07-14 01:20:26.573414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.412 [2024-07-14 01:20:26.573427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.412 [2024-07-14 01:20:26.573454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-14 01:20:26.583219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.412 [2024-07-14 01:20:26.583365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.412 [2024-07-14 01:20:26.583390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.412 [2024-07-14 01:20:26.583404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.412 [2024-07-14 01:20:26.583417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.412 [2024-07-14 01:20:26.583444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.412 qpair failed and we were unable to recover it. 
00:34:37.412 [2024-07-14 01:20:26.593273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.412 [2024-07-14 01:20:26.593416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.412 [2024-07-14 01:20:26.593441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.412 [2024-07-14 01:20:26.593455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.412 [2024-07-14 01:20:26.593468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.412 [2024-07-14 01:20:26.593496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-14 01:20:26.603347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.412 [2024-07-14 01:20:26.603503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.412 [2024-07-14 01:20:26.603533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.412 [2024-07-14 01:20:26.603547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.412 [2024-07-14 01:20:26.603560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.412 [2024-07-14 01:20:26.603588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.412 qpair failed and we were unable to recover it. 00:34:37.412 [2024-07-14 01:20:26.613333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.412 [2024-07-14 01:20:26.613487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.412 [2024-07-14 01:20:26.613512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.412 [2024-07-14 01:20:26.613526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.412 [2024-07-14 01:20:26.613539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.412 [2024-07-14 01:20:26.613566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.412 qpair failed and we were unable to recover it. 
00:34:37.413 [2024-07-14 01:20:26.623344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.623488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.623512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.623526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.623539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.623567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.633354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.633500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.633525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.633539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.633552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.633579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.643414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.643617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.643641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.643655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.643668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.643695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 
00:34:37.413 [2024-07-14 01:20:26.653483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.653637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.653662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.653677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.653689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.653717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.663465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.663638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.663664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.663678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.663690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.663717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.673477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.673629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.673655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.673669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.673682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.673710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 
00:34:37.413 [2024-07-14 01:20:26.683531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.683679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.683704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.683718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.683730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.683758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.693625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.693775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.693805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.693820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.693833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.693860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.703552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.703693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.703719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.703732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.703746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.703773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 
00:34:37.413 [2024-07-14 01:20:26.713567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.713723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.713747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.713761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.713774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.713801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.723633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.723838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.723863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.723884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.723899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.723927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.733625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.733768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.733793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.733807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.733820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.733852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 
00:34:37.413 [2024-07-14 01:20:26.743684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.743828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.743853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.743873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.743886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.743916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.753727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.753893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.413 [2024-07-14 01:20:26.753918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.413 [2024-07-14 01:20:26.753931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.413 [2024-07-14 01:20:26.753944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.413 [2024-07-14 01:20:26.753971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.413 qpair failed and we were unable to recover it. 00:34:37.413 [2024-07-14 01:20:26.763731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.413 [2024-07-14 01:20:26.763891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.414 [2024-07-14 01:20:26.763925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.414 [2024-07-14 01:20:26.763940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.414 [2024-07-14 01:20:26.763954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.414 [2024-07-14 01:20:26.763983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.414 qpair failed and we were unable to recover it. 
00:34:37.414 [2024-07-14 01:20:26.773775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.414 [2024-07-14 01:20:26.773976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.414 [2024-07-14 01:20:26.774002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.414 [2024-07-14 01:20:26.774016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.414 [2024-07-14 01:20:26.774030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.414 [2024-07-14 01:20:26.774057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-14 01:20:26.783798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.414 [2024-07-14 01:20:26.783950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.414 [2024-07-14 01:20:26.783980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.414 [2024-07-14 01:20:26.783995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.414 [2024-07-14 01:20:26.784008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.414 [2024-07-14 01:20:26.784035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-14 01:20:26.793801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.414 [2024-07-14 01:20:26.793955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.414 [2024-07-14 01:20:26.793980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.414 [2024-07-14 01:20:26.793995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.414 [2024-07-14 01:20:26.794007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.414 [2024-07-14 01:20:26.794034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.414 qpair failed and we were unable to recover it. 
00:34:37.414 [2024-07-14 01:20:26.803842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.414 [2024-07-14 01:20:26.804001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.414 [2024-07-14 01:20:26.804026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.414 [2024-07-14 01:20:26.804040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.414 [2024-07-14 01:20:26.804053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.414 [2024-07-14 01:20:26.804081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-14 01:20:26.813882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.414 [2024-07-14 01:20:26.814033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.414 [2024-07-14 01:20:26.814058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.414 [2024-07-14 01:20:26.814072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.414 [2024-07-14 01:20:26.814084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.414 [2024-07-14 01:20:26.814112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.414 qpair failed and we were unable to recover it. 00:34:37.414 [2024-07-14 01:20:26.823900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.414 [2024-07-14 01:20:26.824045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.414 [2024-07-14 01:20:26.824072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.414 [2024-07-14 01:20:26.824087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.414 [2024-07-14 01:20:26.824100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.414 [2024-07-14 01:20:26.824136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.414 qpair failed and we were unable to recover it. 
00:34:37.673 [2024-07-14 01:20:26.833910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.834061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.834087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.834101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.834114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.834143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 00:34:37.673 [2024-07-14 01:20:26.843941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.844100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.844125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.844140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.844153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.844181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 00:34:37.673 [2024-07-14 01:20:26.853958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.854104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.854129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.854143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.854156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.854183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 
00:34:37.673 [2024-07-14 01:20:26.864033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.864179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.864204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.864217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.864230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.864257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 00:34:37.673 [2024-07-14 01:20:26.874074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.874223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.874254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.874269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.874281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.874308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 00:34:37.673 [2024-07-14 01:20:26.884068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.884219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.884243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.884257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.884270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.884298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 
00:34:37.673 [2024-07-14 01:20:26.894074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.894219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.894244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.894258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.894271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.894298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 00:34:37.673 [2024-07-14 01:20:26.904133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.904310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.904335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.904349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.904362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.904389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 00:34:37.673 [2024-07-14 01:20:26.914158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.914304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.914329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.914343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.914361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.914389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 
00:34:37.673 [2024-07-14 01:20:26.924206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.924354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.924379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.924393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.924405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.924433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 00:34:37.673 [2024-07-14 01:20:26.934205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.934353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.934378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.673 [2024-07-14 01:20:26.934392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.673 [2024-07-14 01:20:26.934404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.673 [2024-07-14 01:20:26.934432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.673 qpair failed and we were unable to recover it. 00:34:37.673 [2024-07-14 01:20:26.944261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.673 [2024-07-14 01:20:26.944414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.673 [2024-07-14 01:20:26.944438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:26.944452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:26.944465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:26.944492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 
00:34:37.674 [2024-07-14 01:20:26.954273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:26.954416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:26.954442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:26.954456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:26.954469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:26.954495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:26.964285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:26.964439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:26.964464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:26.964478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:26.964491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:26.964518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:26.974322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:26.974471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:26.974497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:26.974511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:26.974524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:26.974551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 
00:34:37.674 [2024-07-14 01:20:26.984349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:26.984501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:26.984525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:26.984539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:26.984552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:26.984579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:26.994397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:26.994549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:26.994575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:26.994589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:26.994602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:26.994631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:27.004438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.004617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.004642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.004656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.004677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.004705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 
00:34:37.674 [2024-07-14 01:20:27.014414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.014572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.014597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.014611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.014624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.014651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:27.024462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.024608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.024633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.024647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.024659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.024687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:27.034530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.034689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.034714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.034728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.034742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.034769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 
00:34:37.674 [2024-07-14 01:20:27.044526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.044700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.044725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.044739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.044752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.044780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:27.054541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.054697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.054722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.054736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.054749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.054776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:27.064542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.064688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.064714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.064728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.064741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.064768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 
00:34:37.674 [2024-07-14 01:20:27.074566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.074727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.074753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.074767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.074780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.074807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.674 [2024-07-14 01:20:27.084632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.674 [2024-07-14 01:20:27.084790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.674 [2024-07-14 01:20:27.084817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.674 [2024-07-14 01:20:27.084831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.674 [2024-07-14 01:20:27.084844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.674 [2024-07-14 01:20:27.084882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.674 qpair failed and we were unable to recover it. 00:34:37.933 [2024-07-14 01:20:27.094660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.933 [2024-07-14 01:20:27.094812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.933 [2024-07-14 01:20:27.094839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.933 [2024-07-14 01:20:27.094853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.933 [2024-07-14 01:20:27.094884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.933 [2024-07-14 01:20:27.094915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.933 qpair failed and we were unable to recover it. 
00:34:37.933 [2024-07-14 01:20:27.104674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.933 [2024-07-14 01:20:27.104818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.933 [2024-07-14 01:20:27.104843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.933 [2024-07-14 01:20:27.104857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.933 [2024-07-14 01:20:27.104877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.933 [2024-07-14 01:20:27.104905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.933 qpair failed and we were unable to recover it. 00:34:37.933 [2024-07-14 01:20:27.114698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.933 [2024-07-14 01:20:27.114900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.933 [2024-07-14 01:20:27.114927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.933 [2024-07-14 01:20:27.114941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.933 [2024-07-14 01:20:27.114953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.933 [2024-07-14 01:20:27.114980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.933 qpair failed and we were unable to recover it. 00:34:37.933 [2024-07-14 01:20:27.124736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.933 [2024-07-14 01:20:27.124891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.933 [2024-07-14 01:20:27.124917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.933 [2024-07-14 01:20:27.124930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.933 [2024-07-14 01:20:27.124943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.933 [2024-07-14 01:20:27.124971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 
00:34:37.934 [2024-07-14 01:20:27.134748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.134902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.134928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.134942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.134955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.134982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.144811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.145010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.145036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.145050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.145063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.145090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.154811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.154978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.155002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.155016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.155029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.155057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 
00:34:37.934 [2024-07-14 01:20:27.164890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.165090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.165116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.165135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.165149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.165178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.174918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.175118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.175143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.175157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.175170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.175198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.184890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.185038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.185063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.185084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.185099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.185127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 
00:34:37.934 [2024-07-14 01:20:27.194924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.195115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.195140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.195154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.195165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.195193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.204986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.205136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.205161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.205175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.205188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.205216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.214978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.215124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.215149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.215164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.215177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.215205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 
00:34:37.934 [2024-07-14 01:20:27.225072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.225241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.225266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.225280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.225293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.225322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.235020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.235166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.235191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.235205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.235218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.235245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.245082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.245272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.245295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.245308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.245320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.245347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 
00:34:37.934 [2024-07-14 01:20:27.255109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.255262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.255287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.255301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.255314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.255340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.265133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.265295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.934 [2024-07-14 01:20:27.265321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.934 [2024-07-14 01:20:27.265335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.934 [2024-07-14 01:20:27.265348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.934 [2024-07-14 01:20:27.265375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.934 qpair failed and we were unable to recover it. 00:34:37.934 [2024-07-14 01:20:27.275194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.934 [2024-07-14 01:20:27.275358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.935 [2024-07-14 01:20:27.275384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.935 [2024-07-14 01:20:27.275404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.935 [2024-07-14 01:20:27.275418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.935 [2024-07-14 01:20:27.275445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.935 qpair failed and we were unable to recover it. 
00:34:37.935 [2024-07-14 01:20:27.285211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.935 [2024-07-14 01:20:27.285365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.935 [2024-07-14 01:20:27.285391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.935 [2024-07-14 01:20:27.285405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.935 [2024-07-14 01:20:27.285418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.935 [2024-07-14 01:20:27.285446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.935 qpair failed and we were unable to recover it. 00:34:37.935 [2024-07-14 01:20:27.295318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.935 [2024-07-14 01:20:27.295470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.935 [2024-07-14 01:20:27.295496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.935 [2024-07-14 01:20:27.295510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.935 [2024-07-14 01:20:27.295523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.935 [2024-07-14 01:20:27.295552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.935 qpair failed and we were unable to recover it. 00:34:37.935 [2024-07-14 01:20:27.305224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.935 [2024-07-14 01:20:27.305387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.935 [2024-07-14 01:20:27.305413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.935 [2024-07-14 01:20:27.305427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.935 [2024-07-14 01:20:27.305440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.935 [2024-07-14 01:20:27.305468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.935 qpair failed and we were unable to recover it. 
00:34:37.935 [2024-07-14 01:20:27.315256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.935 [2024-07-14 01:20:27.315396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.935 [2024-07-14 01:20:27.315421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.935 [2024-07-14 01:20:27.315435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.935 [2024-07-14 01:20:27.315448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.935 [2024-07-14 01:20:27.315475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.935 qpair failed and we were unable to recover it. 00:34:37.935 [2024-07-14 01:20:27.325345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.935 [2024-07-14 01:20:27.325532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.935 [2024-07-14 01:20:27.325557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.935 [2024-07-14 01:20:27.325571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.935 [2024-07-14 01:20:27.325584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.935 [2024-07-14 01:20:27.325612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.935 qpair failed and we were unable to recover it. 00:34:37.935 [2024-07-14 01:20:27.335365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.935 [2024-07-14 01:20:27.335559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.935 [2024-07-14 01:20:27.335585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.935 [2024-07-14 01:20:27.335600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.935 [2024-07-14 01:20:27.335612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.935 [2024-07-14 01:20:27.335639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.935 qpair failed and we were unable to recover it. 
00:34:37.935 [2024-07-14 01:20:27.345410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:37.935 [2024-07-14 01:20:27.345560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:37.935 [2024-07-14 01:20:27.345586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:37.935 [2024-07-14 01:20:27.345600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:37.935 [2024-07-14 01:20:27.345612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:37.935 [2024-07-14 01:20:27.345640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:37.935 qpair failed and we were unable to recover it. 00:34:38.195 [2024-07-14 01:20:27.355475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.195 [2024-07-14 01:20:27.355621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.195 [2024-07-14 01:20:27.355647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.195 [2024-07-14 01:20:27.355667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.195 [2024-07-14 01:20:27.355681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.195 [2024-07-14 01:20:27.355709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.195 qpair failed and we were unable to recover it. 00:34:38.195 [2024-07-14 01:20:27.365441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.195 [2024-07-14 01:20:27.365629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.195 [2024-07-14 01:20:27.365655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.365675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.365688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.365716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 
00:34:38.196 [2024-07-14 01:20:27.375482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.375648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.375674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.375689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.375702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.375729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.385528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.385713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.385738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.385752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.385765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.385792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.395528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.395670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.395696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.395710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.395723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.395750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 
00:34:38.196 [2024-07-14 01:20:27.405593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.405749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.405774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.405789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.405802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.405829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.415576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.415745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.415771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.415785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.415798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.415826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.425713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.425884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.425910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.425924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.425937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.425965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 
00:34:38.196 [2024-07-14 01:20:27.435694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.435878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.435904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.435919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.435931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.435959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.445731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.445923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.445948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.445962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.445975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.446003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.455744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.455897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.455927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.455942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.455956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.455983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 
00:34:38.196 [2024-07-14 01:20:27.465757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.465903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.465928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.465942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.465954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.465982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.475803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.475991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.476016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.476030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.476043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.476070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.485789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.485955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.485980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.485994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.486007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.486034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 
00:34:38.196 [2024-07-14 01:20:27.495805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.495953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.196 [2024-07-14 01:20:27.495979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.196 [2024-07-14 01:20:27.495993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.196 [2024-07-14 01:20:27.496006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.196 [2024-07-14 01:20:27.496039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.196 qpair failed and we were unable to recover it. 00:34:38.196 [2024-07-14 01:20:27.505836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.196 [2024-07-14 01:20:27.505989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.506013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.506027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.506040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.506067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 00:34:38.197 [2024-07-14 01:20:27.515940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.516138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.516163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.516177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.516190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.516217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 
00:34:38.197 [2024-07-14 01:20:27.525920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.526071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.526096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.526111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.526123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.526151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 00:34:38.197 [2024-07-14 01:20:27.535922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.536072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.536098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.536111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.536126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.536153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 00:34:38.197 [2024-07-14 01:20:27.545993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.546150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.546180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.546195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.546208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.546235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 
00:34:38.197 [2024-07-14 01:20:27.555983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.556134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.556159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.556173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.556188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.556216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 00:34:38.197 [2024-07-14 01:20:27.566019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.566170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.566195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.566209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.566222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.566249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 00:34:38.197 [2024-07-14 01:20:27.576060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.576210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.576235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.576249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.576262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.576289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 
00:34:38.197 [2024-07-14 01:20:27.586066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.586210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.586235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.586248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.586260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.586294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 00:34:38.197 [2024-07-14 01:20:27.596104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.596256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.596280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.596294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.596307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.596334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 00:34:38.197 [2024-07-14 01:20:27.606134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.197 [2024-07-14 01:20:27.606285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.197 [2024-07-14 01:20:27.606310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.197 [2024-07-14 01:20:27.606324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.197 [2024-07-14 01:20:27.606336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.197 [2024-07-14 01:20:27.606364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.197 qpair failed and we were unable to recover it. 
00:34:38.459 [2024-07-14 01:20:27.616180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.616357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.616383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.616397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.616411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.616439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 00:34:38.459 [2024-07-14 01:20:27.626165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.626306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.626331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.626345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.626358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.626385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 00:34:38.459 [2024-07-14 01:20:27.636251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.636428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.636458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.636473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.636486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.636514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 
00:34:38.459 [2024-07-14 01:20:27.646241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.646388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.646413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.646427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.646440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.646467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 00:34:38.459 [2024-07-14 01:20:27.656262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.656409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.656434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.656448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.656461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.656488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 00:34:38.459 [2024-07-14 01:20:27.666321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.666471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.666495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.666509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.666522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.666550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 
00:34:38.459 [2024-07-14 01:20:27.676417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.676563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.676588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.676602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.676615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.676649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 00:34:38.459 [2024-07-14 01:20:27.686386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.686534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.686559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.686573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.686587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.686614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 00:34:38.459 [2024-07-14 01:20:27.696405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.696563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.696588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.696602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.696615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.696642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 
00:34:38.459 [2024-07-14 01:20:27.706453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.706659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.706684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.706698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.706712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.706739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 00:34:38.459 [2024-07-14 01:20:27.716471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.716625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.716650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.716664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.716676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.716704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 00:34:38.459 [2024-07-14 01:20:27.726514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.726672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.726706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.726721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.726734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.459 [2024-07-14 01:20:27.726761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.459 qpair failed and we were unable to recover it. 
00:34:38.459 [2024-07-14 01:20:27.736495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.459 [2024-07-14 01:20:27.736644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.459 [2024-07-14 01:20:27.736670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.459 [2024-07-14 01:20:27.736684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.459 [2024-07-14 01:20:27.736697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.736724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.746504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.746643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.746668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.746682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.746695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.746723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.756553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.756698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.756724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.756738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.756751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.756779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 
00:34:38.460 [2024-07-14 01:20:27.766599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.766759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.766784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.766798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.766816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.766844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.776630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.776784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.776809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.776822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.776836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.776863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.786643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.786792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.786816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.786830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.786844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.786878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 
00:34:38.460 [2024-07-14 01:20:27.796681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.796828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.796853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.796875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.796890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.796918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.806740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.806906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.806931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.806945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.806958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.806986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.816731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.816894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.816919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.816933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.816945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.816973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 
00:34:38.460 [2024-07-14 01:20:27.826806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.826959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.826984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.826998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.827012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.827039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.836785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.836938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.836963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.836977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.836990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.837017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.846864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.847026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.847051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.847065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.847078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.847106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 
00:34:38.460 [2024-07-14 01:20:27.856853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.857010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.857035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.857049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.857067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.857095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.460 [2024-07-14 01:20:27.866897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.460 [2024-07-14 01:20:27.867080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.460 [2024-07-14 01:20:27.867107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.460 [2024-07-14 01:20:27.867121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.460 [2024-07-14 01:20:27.867134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.460 [2024-07-14 01:20:27.867162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.460 qpair failed and we were unable to recover it. 00:34:38.726 [2024-07-14 01:20:27.876910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.877057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.877083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.877098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.877111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.877139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 
00:34:38.726 [2024-07-14 01:20:27.886971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.887121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.887146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.887160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.887173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.887200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 00:34:38.726 [2024-07-14 01:20:27.896972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.897137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.897162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.897176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.897189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.897216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 00:34:38.726 [2024-07-14 01:20:27.907037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.907191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.907216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.907229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.907241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.907269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 
00:34:38.726 [2024-07-14 01:20:27.917048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.917192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.917217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.917230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.917243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.917270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 00:34:38.726 [2024-07-14 01:20:27.927061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.927210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.927235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.927249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.927262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.927289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 00:34:38.726 [2024-07-14 01:20:27.937074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.937224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.937249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.937264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.937277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.937304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 
00:34:38.726 [2024-07-14 01:20:27.947103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.947241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.947266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.947287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.947301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.947330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 00:34:38.726 [2024-07-14 01:20:27.957164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.957356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.957381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.957395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.957407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.957434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 00:34:38.726 [2024-07-14 01:20:27.967211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.967362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.967387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.967401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.967414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.967441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 
00:34:38.726 [2024-07-14 01:20:27.977172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.726 [2024-07-14 01:20:27.977329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.726 [2024-07-14 01:20:27.977354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.726 [2024-07-14 01:20:27.977367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.726 [2024-07-14 01:20:27.977380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.726 [2024-07-14 01:20:27.977408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.726 qpair failed and we were unable to recover it. 00:34:38.726 [2024-07-14 01:20:27.987199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:27.987344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:27.987369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:27.987383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:27.987396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:27.987424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:27.997324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:27.997474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:27.997499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:27.997513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:27.997526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:27.997553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 
00:34:38.727 [2024-07-14 01:20:28.007287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.007467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.007491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.007505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.007518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.007545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:28.017309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.017454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.017479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.017493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.017507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.017534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:28.027340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.027487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.027512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.027526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.027539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.027565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 
00:34:38.727 [2024-07-14 01:20:28.037387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.037526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.037551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.037572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.037585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.037615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:28.047413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.047571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.047595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.047610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.047622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.047650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:28.057444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.057599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.057624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.057638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.057651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.057678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 
00:34:38.727 [2024-07-14 01:20:28.067453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.067648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.067675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.067694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.067708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.067736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:28.077518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.077717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.077742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.077756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.077769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.077796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:28.087553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.087750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.087775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.087789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.087801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.087829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 
00:34:38.727 [2024-07-14 01:20:28.097563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.097710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.097736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.097752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.097765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.097792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:28.107569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.107726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.107751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.107765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.107778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.107805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 00:34:38.727 [2024-07-14 01:20:28.117599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.117747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.117772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.727 [2024-07-14 01:20:28.117785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.727 [2024-07-14 01:20:28.117799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.727 [2024-07-14 01:20:28.117826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.727 qpair failed and we were unable to recover it. 
00:34:38.727 [2024-07-14 01:20:28.127617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.727 [2024-07-14 01:20:28.127780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.727 [2024-07-14 01:20:28.127805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.728 [2024-07-14 01:20:28.127826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.728 [2024-07-14 01:20:28.127841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.728 [2024-07-14 01:20:28.127876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.728 qpair failed and we were unable to recover it. 00:34:38.728 [2024-07-14 01:20:28.137731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.728 [2024-07-14 01:20:28.137923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.728 [2024-07-14 01:20:28.137950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.728 [2024-07-14 01:20:28.137964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.728 [2024-07-14 01:20:28.137977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.728 [2024-07-14 01:20:28.138005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.728 qpair failed and we were unable to recover it. 00:34:38.989 [2024-07-14 01:20:28.147693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.147836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.147862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.147884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.147898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.147926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 
00:34:38.989 [2024-07-14 01:20:28.157700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.157840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.157874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.157892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.157904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.157933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 00:34:38.989 [2024-07-14 01:20:28.167744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.167903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.167928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.167942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.167954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.167983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 00:34:38.989 [2024-07-14 01:20:28.177802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.177990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.178015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.178029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.178041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.178069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 
00:34:38.989 [2024-07-14 01:20:28.187772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.187967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.187993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.188006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.188018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.188046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 00:34:38.989 [2024-07-14 01:20:28.197846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.197999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.198024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.198038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.198050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.198077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 00:34:38.989 [2024-07-14 01:20:28.207844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.208004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.208029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.208043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.208056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.208083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 
00:34:38.989 [2024-07-14 01:20:28.217946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.218117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.218149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.218166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.218179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.218207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 00:34:38.989 [2024-07-14 01:20:28.227897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.228047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.989 [2024-07-14 01:20:28.228072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.989 [2024-07-14 01:20:28.228086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.989 [2024-07-14 01:20:28.228099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.989 [2024-07-14 01:20:28.228127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.989 qpair failed and we were unable to recover it. 00:34:38.989 [2024-07-14 01:20:28.237944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.989 [2024-07-14 01:20:28.238088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.238114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.238128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.238141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.238168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 
00:34:38.990 [2024-07-14 01:20:28.248000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.248184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.248208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.248221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.248232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.248259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.257975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.258151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.258176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.258190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.258202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.258230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.268009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.268155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.268180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.268194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.268207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.268235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 
00:34:38.990 [2024-07-14 01:20:28.278068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.278219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.278246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.278269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.278284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.278313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.288094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.288246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.288271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.288285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.288298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.288326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.298101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.298258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.298284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.298297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.298311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.298339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 
00:34:38.990 [2024-07-14 01:20:28.308113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.308267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.308298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.308313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.308326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.308353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.318185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.318337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.318364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.318384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.318397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.318426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.328255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.328446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.328472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.328486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.328499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.328527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 
00:34:38.990 [2024-07-14 01:20:28.338199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.338348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.338374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.338388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.338400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.338428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.348287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.348483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.348508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.348522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.348535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.348568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.358306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.358463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.358488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.358502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.358515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.358543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 
00:34:38.990 [2024-07-14 01:20:28.368303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.368452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.368477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.368492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.990 [2024-07-14 01:20:28.368505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.990 [2024-07-14 01:20:28.368534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.990 qpair failed and we were unable to recover it. 00:34:38.990 [2024-07-14 01:20:28.378372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.990 [2024-07-14 01:20:28.378527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.990 [2024-07-14 01:20:28.378552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.990 [2024-07-14 01:20:28.378567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.991 [2024-07-14 01:20:28.378579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.991 [2024-07-14 01:20:28.378607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.991 qpair failed and we were unable to recover it. 00:34:38.991 [2024-07-14 01:20:28.388345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.991 [2024-07-14 01:20:28.388498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.991 [2024-07-14 01:20:28.388523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.991 [2024-07-14 01:20:28.388537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.991 [2024-07-14 01:20:28.388550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.991 [2024-07-14 01:20:28.388577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.991 qpair failed and we were unable to recover it. 
00:34:38.991 [2024-07-14 01:20:28.398377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:38.991 [2024-07-14 01:20:28.398519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:38.991 [2024-07-14 01:20:28.398552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:38.991 [2024-07-14 01:20:28.398581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:38.991 [2024-07-14 01:20:28.398605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:38.991 [2024-07-14 01:20:28.398635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:38.991 qpair failed and we were unable to recover it. 00:34:39.250 [2024-07-14 01:20:28.408413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.250 [2024-07-14 01:20:28.408565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.250 [2024-07-14 01:20:28.408590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.250 [2024-07-14 01:20:28.408604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.250 [2024-07-14 01:20:28.408617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.250 [2024-07-14 01:20:28.408645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.250 qpair failed and we were unable to recover it. 00:34:39.250 [2024-07-14 01:20:28.418437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.250 [2024-07-14 01:20:28.418588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.250 [2024-07-14 01:20:28.418613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.250 [2024-07-14 01:20:28.418627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.250 [2024-07-14 01:20:28.418640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.250 [2024-07-14 01:20:28.418668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.250 qpair failed and we were unable to recover it. 
00:34:39.250 [2024-07-14 01:20:28.428467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.250 [2024-07-14 01:20:28.428629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.250 [2024-07-14 01:20:28.428655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.250 [2024-07-14 01:20:28.428669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.428682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.428710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.438510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.438660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.438685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.438700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.438713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.438746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.448558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.448727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.448752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.448766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.448780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.448808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 
00:34:39.251 [2024-07-14 01:20:28.458582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.458767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.458792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.458806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.458819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.458846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.468606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.468753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.468779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.468793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.468806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.468834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.478618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.478765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.478790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.478804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.478817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.478845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 
00:34:39.251 [2024-07-14 01:20:28.488710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.488863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.488902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.488917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.488930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.488959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.498678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.498826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.498851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.498873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.498887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.498915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.508715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.508861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.508893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.508908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.508921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.508948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 
00:34:39.251 [2024-07-14 01:20:28.518742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.518899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.518924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.518938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.518950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.518977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.528769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.528922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.528947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.528961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.528979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.529006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.538811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.538963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.538988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.539002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.539014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.539042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 
00:34:39.251 [2024-07-14 01:20:28.548815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.548968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.548993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.549006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.549019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.549046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.558837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.558988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.559013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.559027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.559040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.559067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 00:34:39.251 [2024-07-14 01:20:28.568885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.251 [2024-07-14 01:20:28.569037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.251 [2024-07-14 01:20:28.569062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.251 [2024-07-14 01:20:28.569076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.251 [2024-07-14 01:20:28.569088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.251 [2024-07-14 01:20:28.569116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.251 qpair failed and we were unable to recover it. 
00:34:39.252 [2024-07-14 01:20:28.579032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.579186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.579211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.579225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.579238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.579265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 00:34:39.252 [2024-07-14 01:20:28.588948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.589099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.589124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.589139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.589152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.589179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 00:34:39.252 [2024-07-14 01:20:28.598981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.599138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.599163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.599177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.599190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.599217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 
00:34:39.252 [2024-07-14 01:20:28.609101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.609258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.609282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.609296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.609309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.609336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 00:34:39.252 [2024-07-14 01:20:28.619044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.619203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.619228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.619241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.619260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.619288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 00:34:39.252 [2024-07-14 01:20:28.629060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.629203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.629227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.629241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.629254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.629281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 
00:34:39.252 [2024-07-14 01:20:28.639114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.639294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.639318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.639333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.639346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.639372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 00:34:39.252 [2024-07-14 01:20:28.649118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.649317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.649342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.649355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.649368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.649395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 00:34:39.252 [2024-07-14 01:20:28.659152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.252 [2024-07-14 01:20:28.659309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.252 [2024-07-14 01:20:28.659345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.252 [2024-07-14 01:20:28.659366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.252 [2024-07-14 01:20:28.659380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.252 [2024-07-14 01:20:28.659410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.252 qpair failed and we were unable to recover it. 
00:34:39.511 [2024-07-14 01:20:28.669188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.669334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.669360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.669374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.669387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.669416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 00:34:39.511 [2024-07-14 01:20:28.679205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.679345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.679370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.679384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.679397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.679425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 00:34:39.511 [2024-07-14 01:20:28.689268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.689419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.689444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.689458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.689472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.689500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 
00:34:39.511 [2024-07-14 01:20:28.699335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.699494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.699520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.699534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.699547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.699574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 00:34:39.511 [2024-07-14 01:20:28.709274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.709421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.709446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.709460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.709479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.709507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 00:34:39.511 [2024-07-14 01:20:28.719326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.719473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.719498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.719513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.719526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.719554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 
00:34:39.511 [2024-07-14 01:20:28.729352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.729498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.729523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.729537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.729550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.729577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 00:34:39.511 [2024-07-14 01:20:28.739369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.739520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.739545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.739559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.739572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.739599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 00:34:39.511 [2024-07-14 01:20:28.749390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:39.511 [2024-07-14 01:20:28.749534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:39.511 [2024-07-14 01:20:28.749559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:39.511 [2024-07-14 01:20:28.749573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:39.511 [2024-07-14 01:20:28.749587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:39.511 [2024-07-14 01:20:28.749613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:39.511 qpair failed and we were unable to recover it. 
00:34:39.511 - 00:34:40.034 [... the same seven-message CONNECT failure sequence repeats, with only the timestamps advancing, for every further connect attempt from [2024-07-14 01:20:28.759447] through [2024-07-14 01:20:29.411522]: Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; Connect command completed with error: sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x11c3600; CQ transport error -6 (No such device or address) on qpair id 2; qpair failed and we were unable to recover it. ...]
00:34:40.034 [2024-07-14 01:20:29.421317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.034 [2024-07-14 01:20:29.421471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.034 [2024-07-14 01:20:29.421496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.034 [2024-07-14 01:20:29.421510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.034 [2024-07-14 01:20:29.421523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.035 [2024-07-14 01:20:29.421550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 01:20:29.431395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.035 [2024-07-14 01:20:29.431546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.035 [2024-07-14 01:20:29.431572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.035 [2024-07-14 01:20:29.431585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.035 [2024-07-14 01:20:29.431598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.035 [2024-07-14 01:20:29.431625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.035 qpair failed and we were unable to recover it. 00:34:40.035 [2024-07-14 01:20:29.441391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.035 [2024-07-14 01:20:29.441537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.035 [2024-07-14 01:20:29.441562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.035 [2024-07-14 01:20:29.441576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.035 [2024-07-14 01:20:29.441588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.035 [2024-07-14 01:20:29.441615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.035 qpair failed and we were unable to recover it. 
00:34:40.296 [2024-07-14 01:20:29.451399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.451568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.451594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.296 [2024-07-14 01:20:29.451608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.296 [2024-07-14 01:20:29.451621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.296 [2024-07-14 01:20:29.451650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.296 qpair failed and we were unable to recover it. 00:34:40.296 [2024-07-14 01:20:29.461446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.461604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.461630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.296 [2024-07-14 01:20:29.461644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.296 [2024-07-14 01:20:29.461657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.296 [2024-07-14 01:20:29.461684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.296 qpair failed and we were unable to recover it. 00:34:40.296 [2024-07-14 01:20:29.471487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.471631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.471656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.296 [2024-07-14 01:20:29.471671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.296 [2024-07-14 01:20:29.471689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.296 [2024-07-14 01:20:29.471717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.296 qpair failed and we were unable to recover it. 
00:34:40.296 [2024-07-14 01:20:29.481476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.481621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.481646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.296 [2024-07-14 01:20:29.481660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.296 [2024-07-14 01:20:29.481673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.296 [2024-07-14 01:20:29.481700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.296 qpair failed and we were unable to recover it. 00:34:40.296 [2024-07-14 01:20:29.491543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.491688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.491713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.296 [2024-07-14 01:20:29.491727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.296 [2024-07-14 01:20:29.491739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.296 [2024-07-14 01:20:29.491767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.296 qpair failed and we were unable to recover it. 00:34:40.296 [2024-07-14 01:20:29.501541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.501686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.501711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.296 [2024-07-14 01:20:29.501725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.296 [2024-07-14 01:20:29.501738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.296 [2024-07-14 01:20:29.501764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.296 qpair failed and we were unable to recover it. 
00:34:40.296 [2024-07-14 01:20:29.511561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.511708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.511733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.296 [2024-07-14 01:20:29.511747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.296 [2024-07-14 01:20:29.511760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.296 [2024-07-14 01:20:29.511788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.296 qpair failed and we were unable to recover it. 00:34:40.296 [2024-07-14 01:20:29.521616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.521765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.521790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.296 [2024-07-14 01:20:29.521805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.296 [2024-07-14 01:20:29.521818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.296 [2024-07-14 01:20:29.521846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.296 qpair failed and we were unable to recover it. 00:34:40.296 [2024-07-14 01:20:29.531643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.296 [2024-07-14 01:20:29.531792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.296 [2024-07-14 01:20:29.531817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.531831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.531844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.531887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 
00:34:40.297 [2024-07-14 01:20:29.541680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.541882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.541907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.541921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.541935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.541962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.551717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.551857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.551889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.551904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.551915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.551944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.561723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.561895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.561921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.561941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.561954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.561983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 
00:34:40.297 [2024-07-14 01:20:29.571783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.571934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.571959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.571973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.571987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.572015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.581781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.581940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.581966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.581980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.581992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.582021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.591796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.591950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.591976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.591990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.592003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.592031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 
00:34:40.297 [2024-07-14 01:20:29.601814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.601970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.601996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.602010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.602023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.602051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.611876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.612067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.612092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.612106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.612119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.612148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.621940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.622080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.622104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.622118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.622131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.622158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 
00:34:40.297 [2024-07-14 01:20:29.631929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.632079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.632104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.632118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.632131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.632158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.641955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.642099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.642124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.642138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.642151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.642178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.651976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.652126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.652151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.652171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.652185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.652213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 
00:34:40.297 [2024-07-14 01:20:29.662005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.662153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.662178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.662192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.662205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.297 [2024-07-14 01:20:29.662233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.297 qpair failed and we were unable to recover it. 00:34:40.297 [2024-07-14 01:20:29.672016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.297 [2024-07-14 01:20:29.672161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.297 [2024-07-14 01:20:29.672186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.297 [2024-07-14 01:20:29.672200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.297 [2024-07-14 01:20:29.672213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.298 [2024-07-14 01:20:29.672243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.298 qpair failed and we were unable to recover it. 00:34:40.298 [2024-07-14 01:20:29.682059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.298 [2024-07-14 01:20:29.682242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.298 [2024-07-14 01:20:29.682266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.298 [2024-07-14 01:20:29.682280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.298 [2024-07-14 01:20:29.682293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.298 [2024-07-14 01:20:29.682320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.298 qpair failed and we were unable to recover it. 
00:34:40.298 [2024-07-14 01:20:29.692133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.298 [2024-07-14 01:20:29.692305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.298 [2024-07-14 01:20:29.692330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.298 [2024-07-14 01:20:29.692344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.298 [2024-07-14 01:20:29.692363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.298 [2024-07-14 01:20:29.692390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.298 qpair failed and we were unable to recover it. 00:34:40.298 [2024-07-14 01:20:29.702121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.298 [2024-07-14 01:20:29.702271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.298 [2024-07-14 01:20:29.702296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.298 [2024-07-14 01:20:29.702311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.298 [2024-07-14 01:20:29.702323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.298 [2024-07-14 01:20:29.702350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.298 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.712167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.712318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.712344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.712358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.712371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.712400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 
00:34:40.559 [2024-07-14 01:20:29.722194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.722342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.722367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.722381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.722394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.722421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.732223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.732377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.732401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.732415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.732428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.732456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.742242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.742390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.742415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.742436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.742449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.742477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 
00:34:40.559 [2024-07-14 01:20:29.752265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.752412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.752437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.752451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.752464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.752491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.762316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.762481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.762506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.762520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.762533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.762560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.772446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.772625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.772649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.772663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.772676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.772705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 
00:34:40.559 [2024-07-14 01:20:29.782392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.782571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.782596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.782610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.782623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.782650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.792376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.792523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.792548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.792562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.792575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.792602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.802437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.802583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.802608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.802622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.802635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.802662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 
00:34:40.559 [2024-07-14 01:20:29.812507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.812683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.812709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.812723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.812736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.812763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.822473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.822619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.822644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.822658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.559 [2024-07-14 01:20:29.822671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.559 [2024-07-14 01:20:29.822698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.559 qpair failed and we were unable to recover it. 00:34:40.559 [2024-07-14 01:20:29.832485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.559 [2024-07-14 01:20:29.832627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.559 [2024-07-14 01:20:29.832657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.559 [2024-07-14 01:20:29.832673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.832686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.832713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 
00:34:40.560 [2024-07-14 01:20:29.842523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.842665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.842691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.842705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.842717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.842744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.560 [2024-07-14 01:20:29.852549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.852698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.852723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.852736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.852750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.852777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.560 [2024-07-14 01:20:29.862628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.862804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.862828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.862842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.862855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.862889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 
00:34:40.560 [2024-07-14 01:20:29.872606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.872747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.872772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.872785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.872799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.872832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.560 [2024-07-14 01:20:29.882638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.882801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.882826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.882840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.882853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.882886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.560 [2024-07-14 01:20:29.892766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.892937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.892962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.892976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.892988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.893016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 
00:34:40.560 [2024-07-14 01:20:29.902686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.902830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.902854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.902874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.902889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.902916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.560 [2024-07-14 01:20:29.912708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.912849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.912881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.912896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.912909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.912936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.560 [2024-07-14 01:20:29.922764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.922929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.922960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.922974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.922986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.923014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 
00:34:40.560 [2024-07-14 01:20:29.932789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.932943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.932968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.932983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.932995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.933023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.560 [2024-07-14 01:20:29.942816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.942972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.942998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.943012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.943024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.943051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.560 [2024-07-14 01:20:29.952838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.952993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.953018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.953032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.953045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.953072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 
00:34:40.560 [2024-07-14 01:20:29.962899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.560 [2024-07-14 01:20:29.963046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.560 [2024-07-14 01:20:29.963071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.560 [2024-07-14 01:20:29.963085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.560 [2024-07-14 01:20:29.963097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.560 [2024-07-14 01:20:29.963131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.560 qpair failed and we were unable to recover it. 00:34:40.822 [2024-07-14 01:20:29.972905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.822 [2024-07-14 01:20:29.973060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.822 [2024-07-14 01:20:29.973086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.822 [2024-07-14 01:20:29.973100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.822 [2024-07-14 01:20:29.973113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.822 [2024-07-14 01:20:29.973140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.822 qpair failed and we were unable to recover it. 00:34:40.822 [2024-07-14 01:20:29.982947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.822 [2024-07-14 01:20:29.983098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.822 [2024-07-14 01:20:29.983125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.822 [2024-07-14 01:20:29.983139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.822 [2024-07-14 01:20:29.983152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.822 [2024-07-14 01:20:29.983180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.822 qpair failed and we were unable to recover it. 
00:34:40.822 [2024-07-14 01:20:29.992997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.822 [2024-07-14 01:20:29.993148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.822 [2024-07-14 01:20:29.993173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.822 [2024-07-14 01:20:29.993187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.822 [2024-07-14 01:20:29.993200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.822 [2024-07-14 01:20:29.993227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.822 qpair failed and we were unable to recover it. 00:34:40.822 [2024-07-14 01:20:30.002970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.822 [2024-07-14 01:20:30.003111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.822 [2024-07-14 01:20:30.003136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.822 [2024-07-14 01:20:30.003151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.822 [2024-07-14 01:20:30.003164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.822 [2024-07-14 01:20:30.003191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.822 qpair failed and we were unable to recover it. 00:34:40.822 [2024-07-14 01:20:30.013046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.822 [2024-07-14 01:20:30.013203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.822 [2024-07-14 01:20:30.013241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.822 [2024-07-14 01:20:30.013258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.822 [2024-07-14 01:20:30.013271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.822 [2024-07-14 01:20:30.013300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.822 qpair failed and we were unable to recover it. 
00:34:40.822 [2024-07-14 01:20:30.023058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.822 [2024-07-14 01:20:30.023205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.822 [2024-07-14 01:20:30.023231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.822 [2024-07-14 01:20:30.023245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.822 [2024-07-14 01:20:30.023259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.822 [2024-07-14 01:20:30.023286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.822 qpair failed and we were unable to recover it. 00:34:40.822 [2024-07-14 01:20:30.033076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.822 [2024-07-14 01:20:30.033220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.822 [2024-07-14 01:20:30.033246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.822 [2024-07-14 01:20:30.033261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.822 [2024-07-14 01:20:30.033275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.822 [2024-07-14 01:20:30.033304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.822 qpair failed and we were unable to recover it. 00:34:40.822 [2024-07-14 01:20:30.043186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.822 [2024-07-14 01:20:30.043355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.822 [2024-07-14 01:20:30.043381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.822 [2024-07-14 01:20:30.043396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.822 [2024-07-14 01:20:30.043409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.822 [2024-07-14 01:20:30.043437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.822 qpair failed and we were unable to recover it. 
00:34:40.823 [2024-07-14 01:20:30.053144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.053313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.053338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.053352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.053365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.053398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.063195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.063346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.063372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.063387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.063399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.063426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.073221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.073368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.073394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.073408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.073420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.073447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 
00:34:40.823 [2024-07-14 01:20:30.083233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.083386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.083411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.083425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.083438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.083466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.093241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.093396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.093421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.093435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.093448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.093475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.103316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.103475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.103506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.103521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.103534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.103561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 
00:34:40.823 [2024-07-14 01:20:30.113268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.113414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.113440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.113454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.113466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.113496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.123342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.123503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.123528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.123544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.123556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.123583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.133342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.133498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.133523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.133538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.133551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.133578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 
00:34:40.823 [2024-07-14 01:20:30.143368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.143532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.143557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.143571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.143590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.143618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.153385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.153530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.153555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.153569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.153582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.153609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.163448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.163619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.163644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.163658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.163671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.163698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 
00:34:40.823 [2024-07-14 01:20:30.173521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.173674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.173699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.173713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.173725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.173753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.183478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.183629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.823 [2024-07-14 01:20:30.183654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.823 [2024-07-14 01:20:30.183668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.823 [2024-07-14 01:20:30.183680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.823 [2024-07-14 01:20:30.183708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.823 qpair failed and we were unable to recover it. 00:34:40.823 [2024-07-14 01:20:30.193553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.823 [2024-07-14 01:20:30.193705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.824 [2024-07-14 01:20:30.193730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.824 [2024-07-14 01:20:30.193744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.824 [2024-07-14 01:20:30.193757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.824 [2024-07-14 01:20:30.193785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.824 qpair failed and we were unable to recover it. 
00:34:40.824 [2024-07-14 01:20:30.203549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.824 [2024-07-14 01:20:30.203698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.824 [2024-07-14 01:20:30.203723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.824 [2024-07-14 01:20:30.203737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.824 [2024-07-14 01:20:30.203749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.824 [2024-07-14 01:20:30.203776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.824 qpair failed and we were unable to recover it. 00:34:40.824 [2024-07-14 01:20:30.213585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.824 [2024-07-14 01:20:30.213778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.824 [2024-07-14 01:20:30.213804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.824 [2024-07-14 01:20:30.213818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.824 [2024-07-14 01:20:30.213831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.824 [2024-07-14 01:20:30.213858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.824 qpair failed and we were unable to recover it. 00:34:40.824 [2024-07-14 01:20:30.223609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.824 [2024-07-14 01:20:30.223756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.824 [2024-07-14 01:20:30.223782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.824 [2024-07-14 01:20:30.223796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.824 [2024-07-14 01:20:30.223809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.824 [2024-07-14 01:20:30.223836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.824 qpair failed and we were unable to recover it. 
00:34:40.824 [2024-07-14 01:20:30.233631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:40.824 [2024-07-14 01:20:30.233785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:40.824 [2024-07-14 01:20:30.233811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:40.824 [2024-07-14 01:20:30.233826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:40.824 [2024-07-14 01:20:30.233844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:40.824 [2024-07-14 01:20:30.233897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:40.824 qpair failed and we were unable to recover it. 00:34:41.082 [2024-07-14 01:20:30.243657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.082 [2024-07-14 01:20:30.243803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.082 [2024-07-14 01:20:30.243829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.082 [2024-07-14 01:20:30.243843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.082 [2024-07-14 01:20:30.243855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.082 [2024-07-14 01:20:30.243889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.082 qpair failed and we were unable to recover it. 00:34:41.082 [2024-07-14 01:20:30.253666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.082 [2024-07-14 01:20:30.253810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.082 [2024-07-14 01:20:30.253833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.082 [2024-07-14 01:20:30.253847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.253858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.253893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 
00:34:41.083 [2024-07-14 01:20:30.263697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.263844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.263874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.263889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.263903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.263930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 00:34:41.083 [2024-07-14 01:20:30.273737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.273899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.273925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.273939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.273951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.273979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 00:34:41.083 [2024-07-14 01:20:30.283773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.283947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.283972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.283986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.283998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.284025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 
00:34:41.083 [2024-07-14 01:20:30.293804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.293963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.293988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.294002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.294015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.294042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 00:34:41.083 [2024-07-14 01:20:30.303859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.304042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.304067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.304081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.304094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.304122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 00:34:41.083 [2024-07-14 01:20:30.313844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.313999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.314025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.314039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.314051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.314079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 
00:34:41.083 [2024-07-14 01:20:30.323857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.324002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.324027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.324047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.324061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.324090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 00:34:41.083 [2024-07-14 01:20:30.333939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.334095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.334122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.334136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.334148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.334175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 00:34:41.083 [2024-07-14 01:20:30.343932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.344089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.344114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.344128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.344141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.344168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 
00:34:41.083 [2024-07-14 01:20:30.353967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.354165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.354189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.354203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.354216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.354243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 00:34:41.083 [2024-07-14 01:20:30.363990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:41.083 [2024-07-14 01:20:30.364135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:41.083 [2024-07-14 01:20:30.364160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:41.083 [2024-07-14 01:20:30.364174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:41.083 [2024-07-14 01:20:30.364187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11c3600 00:34:41.083 [2024-07-14 01:20:30.364214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:41.083 qpair failed and we were unable to recover it. 00:34:41.083 [2024-07-14 01:20:30.364359] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:41.083 A controller has encountered a failure and is being reset. 00:34:41.083 [2024-07-14 01:20:30.364419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d15b0 (9): Bad file descriptor 00:34:41.083 Controller properly reset. 00:34:41.083 Initializing NVMe Controllers 00:34:41.083 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:41.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:41.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:41.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:41.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:41.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:41.083 Initialization complete. Launching workers. 
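The burst of identical failures above is the SPDK host driver retrying the fabric CONNECT for its I/O queue pairs against a controller the target no longer recognizes (sct 1, sc 130, "Unknown controller ID 0x1"); once the admin keep-alive also fails, the controller is reset and the four I/O qpairs are re-associated with lcores 0-3. A minimal sketch of a target/host setup that exercises the same CONNECT path is shown below. It uses the kernel initiator rather than the SPDK host driver that produced these messages, and the bdev name, serial number and addresses are illustrative placeholders chosen to match the log, not the values used by this test script.

  # Target side: export one malloc bdev over NVMe/TCP (run from an SPDK checkout)
  ./build/bin/nvmf_tgt -m 0xF &
  ./scripts/rpc.py nvmf_create_transport -t TCP
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: connect, then remove the subsystem on the target so the host's
  # reconnect attempts fail at CONNECT time, as in the trace above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1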
00:34:41.083 Starting thread on core 1 00:34:41.083 Starting thread on core 2 00:34:41.083 Starting thread on core 3 00:34:41.083 Starting thread on core 0 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:41.083 00:34:41.083 real 0m10.666s 00:34:41.083 user 0m17.373s 00:34:41.083 sys 0m5.766s 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:41.083 ************************************ 00:34:41.083 END TEST nvmf_target_disconnect_tc2 00:34:41.083 ************************************ 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:41.083 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:41.084 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:41.084 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:41.084 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:41.084 rmmod nvme_tcp 00:34:41.084 rmmod nvme_fabrics 00:34:41.084 rmmod nvme_keyring 00:34:41.084 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1301138 ']' 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1301138 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1301138 ']' 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1301138 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1301138 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1301138' 00:34:41.343 killing process with pid 1301138 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1301138 00:34:41.343 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1301138 00:34:41.601 
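The epilogue above reports the per-test timing, kills the target application (process reactor_4) and unloads the host-side NVMe-oF kernel modules. A rough stand-alone equivalent of that cleanup, with the pid as a placeholder, would be:

  nvmfpid=1301138                                          # placeholder: pid of the running nvmf_tgt
  kill "$nvmfpid"
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 1; done   # wait for it to exit
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics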
01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:41.601 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:41.601 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:41.601 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:41.601 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:41.601 01:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.601 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:41.601 01:20:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.505 01:20:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:43.505 00:34:43.505 real 0m15.285s 00:34:43.505 user 0m43.048s 00:34:43.505 sys 0m7.655s 00:34:43.505 01:20:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:43.505 01:20:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:43.505 ************************************ 00:34:43.505 END TEST nvmf_target_disconnect 00:34:43.505 ************************************ 00:34:43.505 01:20:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:43.505 01:20:32 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:43.505 01:20:32 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:43.505 01:20:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.505 01:20:32 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:43.505 00:34:43.505 real 27m3.531s 00:34:43.505 user 73m25.321s 00:34:43.505 sys 6m27.960s 00:34:43.505 01:20:32 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:43.505 01:20:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.505 ************************************ 00:34:43.505 END TEST nvmf_tcp 00:34:43.505 ************************************ 00:34:43.505 01:20:32 -- common/autotest_common.sh@1142 -- # return 0 00:34:43.505 01:20:32 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:43.505 01:20:32 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:43.505 01:20:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:43.505 01:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:43.505 01:20:32 -- common/autotest_common.sh@10 -- # set +x 00:34:43.505 ************************************ 00:34:43.505 START TEST spdkcli_nvmf_tcp 00:34:43.505 ************************************ 00:34:43.505 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:43.764 * Looking for test storage... 
00:34:43.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1302325 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1302325 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1302325 ']' 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:43.764 01:20:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:43.764 [2024-07-14 01:20:33.011025] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:43.764 [2024-07-14 01:20:33.011122] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302325 ] 00:34:43.764 EAL: No free 2048 kB hugepages reported on node 1 00:34:43.764 [2024-07-14 01:20:33.069453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:43.764 [2024-07-14 01:20:33.155332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.764 [2024-07-14 01:20:33.155336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:44.022 01:20:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:44.022 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:44.022 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:44.022 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:44.022 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:44.022 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:44.022 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:44.022 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:44.022 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:44.022 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:44.022 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:44.022 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:44.022 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:44.022 ' 00:34:46.552 [2024-07-14 01:20:35.804029] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:47.929 [2024-07-14 01:20:37.044402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:50.477 [2024-07-14 01:20:39.331557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:52.378 [2024-07-14 01:20:41.301955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:53.754 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:53.754 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:53.754 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:53.754 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:53.754 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:53.754 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:53.754 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:53.754 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:53.754 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:53.754 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:53.754 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:53.754 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:53.754 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:53.754 01:20:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:53.754 01:20:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:53.754 01:20:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.754 01:20:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:53.754 01:20:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:53.754 01:20:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.754 01:20:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:53.754 01:20:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:54.018 01:20:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:54.018 01:20:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:54.018 01:20:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:54.018 01:20:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:54.018 01:20:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.279 01:20:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:54.279 01:20:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:54.279 01:20:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.279 01:20:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:54.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:54.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:54.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:54.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:54.279 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:54.279 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:54.279 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:54.279 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:54.279 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:54.279 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:54.279 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:54.279 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:54.279 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:54.279 ' 00:34:59.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:59.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:59.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:59.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:59.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:59.551 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:59.551 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:59.551 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:59.551 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:59.551 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:59.551 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:59.551 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:59.551 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:59.551 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1302325 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1302325 ']' 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1302325 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1302325 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1302325' 00:34:59.551 killing process with pid 1302325 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1302325 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1302325 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1302325 ']' 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1302325 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1302325 ']' 00:34:59.551 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1302325 00:34:59.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1302325) - No such process 00:34:59.809 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1302325 is not found' 00:34:59.809 Process with pid 1302325 is not found 00:34:59.809 01:20:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:59.809 01:20:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:59.809 01:20:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:59.809 00:34:59.809 real 0m16.064s 00:34:59.809 user 0m34.093s 00:34:59.809 sys 0m0.820s 00:34:59.809 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:59.809 01:20:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.809 ************************************ 00:34:59.809 END TEST spdkcli_nvmf_tcp 00:34:59.809 ************************************ 00:34:59.809 01:20:48 -- common/autotest_common.sh@1142 -- # return 0 00:34:59.809 01:20:48 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:59.809 01:20:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:59.809 01:20:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:59.809 01:20:48 -- common/autotest_common.sh@10 -- # set +x 00:34:59.809 ************************************ 00:34:59.809 START TEST nvmf_identify_passthru 00:34:59.809 ************************************ 00:34:59.809 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:59.809 * Looking for test storage... 00:34:59.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:59.809 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.809 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.809 01:20:49 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.809 01:20:49 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.809 01:20:49 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.809 01:20:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:59.810 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.810 01:20:49 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.810 01:20:49 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.810 01:20:49 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:59.810 01:20:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.810 01:20:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.810 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:59.810 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:59.810 01:20:49 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:59.810 01:20:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.717 01:20:51 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:01.717 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:01.717 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:01.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.717 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:01.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.718 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:01.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:35:01.977 00:35:01.977 --- 10.0.0.2 ping statistics --- 00:35:01.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.977 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:35:01.977 00:35:01.977 --- 10.0.0.1 ping statistics --- 00:35:01.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.977 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:01.977 01:20:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:01.977 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:01.977 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:35:01.977 01:20:51 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:35:01.977 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:35:01.977 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:35:01.977 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:01.977 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:01.977 01:20:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:01.977 EAL: No free 2048 kB hugepages reported on node 1 00:35:06.227 
01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:06.227 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:06.227 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:06.227 01:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:06.227 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.424 01:20:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:10.424 01:20:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.424 01:20:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.424 01:20:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1306831 00:35:10.424 01:20:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:10.424 01:20:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:10.424 01:20:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1306831 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1306831 ']' 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:10.424 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.424 [2024-07-14 01:20:59.757099] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:35:10.424 [2024-07-14 01:20:59.757178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.424 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.424 [2024-07-14 01:20:59.826317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:10.682 [2024-07-14 01:20:59.919315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.682 [2024-07-14 01:20:59.919378] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:10.682 [2024-07-14 01:20:59.919394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.682 [2024-07-14 01:20:59.919407] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.682 [2024-07-14 01:20:59.919419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:10.682 [2024-07-14 01:20:59.919475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.682 [2024-07-14 01:20:59.919506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:10.682 [2024-07-14 01:20:59.919785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:10.682 [2024-07-14 01:20:59.919788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.682 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:10.682 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:10.682 01:20:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:10.682 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.682 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.682 INFO: Log level set to 20 00:35:10.682 INFO: Requests: 00:35:10.682 { 00:35:10.682 "jsonrpc": "2.0", 00:35:10.682 "method": "nvmf_set_config", 00:35:10.682 "id": 1, 00:35:10.682 "params": { 00:35:10.682 "admin_cmd_passthru": { 00:35:10.682 "identify_ctrlr": true 00:35:10.682 } 00:35:10.682 } 00:35:10.682 } 00:35:10.682 00:35:10.682 INFO: response: 00:35:10.682 { 00:35:10.682 "jsonrpc": "2.0", 00:35:10.682 "id": 1, 00:35:10.682 "result": true 00:35:10.682 } 00:35:10.682 00:35:10.682 01:20:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.682 01:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:10.682 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.682 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.682 INFO: Setting log level to 20 00:35:10.682 INFO: Setting log level to 20 00:35:10.682 INFO: Log level set to 20 00:35:10.682 INFO: Log level set to 20 00:35:10.682 INFO: Requests: 00:35:10.682 { 00:35:10.682 "jsonrpc": "2.0", 00:35:10.682 "method": "framework_start_init", 00:35:10.682 "id": 1 00:35:10.682 } 00:35:10.682 00:35:10.682 INFO: Requests: 00:35:10.682 { 00:35:10.682 "jsonrpc": "2.0", 00:35:10.682 "method": "framework_start_init", 00:35:10.682 "id": 1 00:35:10.682 } 00:35:10.682 00:35:10.940 [2024-07-14 01:21:00.099317] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:10.940 INFO: response: 00:35:10.940 { 00:35:10.940 "jsonrpc": "2.0", 00:35:10.940 "id": 1, 00:35:10.940 "result": true 00:35:10.940 } 00:35:10.940 00:35:10.940 INFO: response: 00:35:10.940 { 00:35:10.940 "jsonrpc": "2.0", 00:35:10.940 "id": 1, 00:35:10.940 "result": true 00:35:10.940 } 00:35:10.940 00:35:10.940 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.940 01:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:10.940 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.940 01:21:00 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:10.940 INFO: Setting log level to 40 00:35:10.940 INFO: Setting log level to 40 00:35:10.940 INFO: Setting log level to 40 00:35:10.940 [2024-07-14 01:21:00.109395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.940 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.940 01:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:10.940 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:10.940 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:10.940 01:21:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:10.940 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.940 01:21:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.221 Nvme0n1 00:35:14.221 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.221 01:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:14.221 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.221 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.221 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.222 01:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:14.222 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.222 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.222 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.222 01:21:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:14.222 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.222 01:21:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.222 [2024-07-14 01:21:03.000691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.222 [ 00:35:14.222 { 00:35:14.222 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:14.222 "subtype": "Discovery", 00:35:14.222 "listen_addresses": [], 00:35:14.222 "allow_any_host": true, 00:35:14.222 "hosts": [] 00:35:14.222 }, 00:35:14.222 { 00:35:14.222 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:14.222 "subtype": "NVMe", 00:35:14.222 "listen_addresses": [ 00:35:14.222 { 00:35:14.222 "trtype": "TCP", 00:35:14.222 "adrfam": "IPv4", 00:35:14.222 "traddr": "10.0.0.2", 00:35:14.222 "trsvcid": "4420" 00:35:14.222 } 00:35:14.222 ], 00:35:14.222 "allow_any_host": true, 00:35:14.222 "hosts": [], 00:35:14.222 "serial_number": 
"SPDK00000000000001", 00:35:14.222 "model_number": "SPDK bdev Controller", 00:35:14.222 "max_namespaces": 1, 00:35:14.222 "min_cntlid": 1, 00:35:14.222 "max_cntlid": 65519, 00:35:14.222 "namespaces": [ 00:35:14.222 { 00:35:14.222 "nsid": 1, 00:35:14.222 "bdev_name": "Nvme0n1", 00:35:14.222 "name": "Nvme0n1", 00:35:14.222 "nguid": "FD66EA94F728472A8706478038227B6E", 00:35:14.222 "uuid": "fd66ea94-f728-472a-8706-478038227b6e" 00:35:14.222 } 00:35:14.222 ] 00:35:14.222 } 00:35:14.222 ] 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:14.222 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:14.222 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:14.222 01:21:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:14.222 rmmod nvme_tcp 00:35:14.222 rmmod nvme_fabrics 00:35:14.222 rmmod nvme_keyring 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:14.222 01:21:03 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1306831 ']' 00:35:14.222 01:21:03 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1306831 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1306831 ']' 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1306831 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1306831 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1306831' 00:35:14.222 killing process with pid 1306831 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1306831 00:35:14.222 01:21:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1306831 00:35:16.126 01:21:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:16.126 01:21:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:16.126 01:21:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:16.126 01:21:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:16.126 01:21:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:16.126 01:21:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.126 01:21:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:16.126 01:21:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.023 01:21:07 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:18.023 00:35:18.023 real 0m18.105s 00:35:18.023 user 0m27.046s 00:35:18.023 sys 0m2.328s 00:35:18.023 01:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:18.023 01:21:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:18.023 ************************************ 00:35:18.023 END TEST nvmf_identify_passthru 00:35:18.023 ************************************ 00:35:18.023 01:21:07 -- common/autotest_common.sh@1142 -- # return 0 00:35:18.023 01:21:07 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:18.023 01:21:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:18.023 01:21:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:18.023 01:21:07 -- common/autotest_common.sh@10 -- # set +x 00:35:18.023 ************************************ 00:35:18.023 START TEST nvmf_dif 00:35:18.023 ************************************ 00:35:18.023 01:21:07 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:18.023 * Looking for test storage... 
00:35:18.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:18.023 01:21:07 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.023 01:21:07 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.023 01:21:07 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.023 01:21:07 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.023 01:21:07 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.023 01:21:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.023 01:21:07 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.023 01:21:07 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:18.023 01:21:07 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:18.023 01:21:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:18.023 01:21:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:18.023 01:21:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:18.023 01:21:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:18.023 01:21:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.023 01:21:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:18.023 01:21:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:18.023 01:21:07 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:18.023 01:21:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:19.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:19.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:19.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:19.919 01:21:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:19.920 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:19.920 01:21:09 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:19.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:19.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:35:19.920 00:35:19.920 --- 10.0.0.2 ping statistics --- 00:35:19.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.920 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:19.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:19.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:35:19.920 00:35:19.920 --- 10.0.0.1 ping statistics --- 00:35:19.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:19.920 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:19.920 01:21:09 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:21.291 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:21.291 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:21.291 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:21.291 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:21.291 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:21.291 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:21.291 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:21.291 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:21.291 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:21.291 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:21.291 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:21.291 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:21.291 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:21.291 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:21.291 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:21.291 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:21.291 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:21.291 01:21:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:21.291 01:21:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:21.291 01:21:10 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:21.291 01:21:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1310703 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:21.291 01:21:10 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1310703 00:35:21.291 01:21:10 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1310703 ']' 00:35:21.291 01:21:10 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.291 01:21:10 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:21.291 01:21:10 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:21.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.291 01:21:10 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:21.291 01:21:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:21.291 [2024-07-14 01:21:10.614518] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:35:21.291 [2024-07-14 01:21:10.614605] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:21.291 EAL: No free 2048 kB hugepages reported on node 1 00:35:21.291 [2024-07-14 01:21:10.678536] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:21.549 [2024-07-14 01:21:10.763726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:21.549 [2024-07-14 01:21:10.763783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:21.549 [2024-07-14 01:21:10.763811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:21.549 [2024-07-14 01:21:10.763822] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:21.549 [2024-07-14 01:21:10.763832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
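The app_setup_trace notices above point at the target's runtime tracepoints (the target was launched with -i 0 -e 0xFFFF, so the full tracepoint group mask is enabled). A minimal sketch of grabbing a snapshot by hand, assuming the spdk_trace tool from the same checkout sits under build/bin and that shm id 0 matches the -i 0 the target was started with:

  # dump nvmf tracepoints from the running target (command quoted from the notice above)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

  # or keep the raw trace file for offline analysis, as the last notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0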
00:35:21.549 [2024-07-14 01:21:10.763857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:21.549 01:21:10 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 01:21:10 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:21.549 01:21:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:21.549 01:21:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 [2024-07-14 01:21:10.893453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.549 01:21:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:21.549 01:21:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 ************************************ 00:35:21.549 START TEST fio_dif_1_default 00:35:21.549 ************************************ 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 bdev_null0 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:21.549 [2024-07-14 01:21:10.949715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:21.549 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:21.550 { 00:35:21.550 "params": { 00:35:21.550 "name": "Nvme$subsystem", 00:35:21.550 "trtype": "$TEST_TRANSPORT", 00:35:21.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:21.550 "adrfam": "ipv4", 00:35:21.550 "trsvcid": "$NVMF_PORT", 00:35:21.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:21.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:21.550 "hdgst": ${hdgst:-false}, 00:35:21.550 "ddgst": ${ddgst:-false} 00:35:21.550 }, 00:35:21.550 "method": "bdev_nvme_attach_controller" 00:35:21.550 } 00:35:21.550 EOF 00:35:21.550 )") 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:21.550 01:21:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:21.550 "params": { 00:35:21.550 "name": "Nvme0", 00:35:21.550 "trtype": "tcp", 00:35:21.550 "traddr": "10.0.0.2", 00:35:21.550 "adrfam": "ipv4", 00:35:21.550 "trsvcid": "4420", 00:35:21.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.550 "hdgst": false, 00:35:21.550 "ddgst": false 00:35:21.550 }, 00:35:21.550 "method": "bdev_nvme_attach_controller" 00:35:21.550 }' 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:21.807 01:21:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:21.807 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:21.807 fio-3.35 00:35:21.807 Starting 1 thread 00:35:22.064 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.306 00:35:34.306 filename0: (groupid=0, jobs=1): err= 0: pid=1310924: Sun Jul 14 01:21:21 2024 00:35:34.306 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10009msec) 00:35:34.306 slat (nsec): min=4488, max=52471, avg=9368.98, stdev=2750.84 00:35:34.306 clat (usec): min=40909, max=46680, avg=41673.81, stdev=570.97 00:35:34.306 lat (usec): min=40917, max=46694, avg=41683.18, stdev=570.95 00:35:34.306 clat percentiles (usec): 00:35:34.306 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:34.306 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:35:34.306 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:34.306 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:35:34.306 | 99.99th=[46924] 00:35:34.306 bw ( KiB/s): min= 352, max= 384, per=99.57%, avg=382.40, stdev= 7.16, samples=20 00:35:34.306 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:35:34.306 
lat (msec) : 50=100.00% 00:35:34.306 cpu : usr=89.76%, sys=9.97%, ctx=18, majf=0, minf=240 00:35:34.306 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:34.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.306 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.306 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:34.306 00:35:34.306 Run status group 0 (all jobs): 00:35:34.306 READ: bw=384KiB/s (393kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=3840KiB (3932kB), run=10009-10009msec 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.306 00:35:34.306 real 0m11.030s 00:35:34.306 user 0m10.120s 00:35:34.306 sys 0m1.250s 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:34.306 ************************************ 00:35:34.306 END TEST fio_dif_1_default 00:35:34.306 ************************************ 00:35:34.306 01:21:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:34.306 01:21:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:34.306 01:21:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:34.306 01:21:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:34.306 01:21:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:34.306 ************************************ 00:35:34.306 START TEST fio_dif_1_multi_subsystems 00:35:34.306 ************************************ 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:34.306 01:21:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:34.307 01:21:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:34.307 01:21:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:34.307 01:21:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.307 01:21:21 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:34.307 01:21:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:34.307 01:21:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.307 bdev_null0 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.307 [2024-07-14 01:21:22.027455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.307 bdev_null1 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.307 { 00:35:34.307 "params": { 00:35:34.307 "name": "Nvme$subsystem", 00:35:34.307 "trtype": "$TEST_TRANSPORT", 00:35:34.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.307 "adrfam": "ipv4", 00:35:34.307 "trsvcid": "$NVMF_PORT", 00:35:34.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.307 "hdgst": ${hdgst:-false}, 00:35:34.307 "ddgst": ${ddgst:-false} 00:35:34.307 }, 00:35:34.307 "method": "bdev_nvme_attach_controller" 00:35:34.307 } 00:35:34.307 EOF 00:35:34.307 )") 00:35:34.307 
01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.307 { 00:35:34.307 "params": { 00:35:34.307 "name": "Nvme$subsystem", 00:35:34.307 "trtype": "$TEST_TRANSPORT", 00:35:34.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.307 "adrfam": "ipv4", 00:35:34.307 "trsvcid": "$NVMF_PORT", 00:35:34.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.307 "hdgst": ${hdgst:-false}, 00:35:34.307 "ddgst": ${ddgst:-false} 00:35:34.307 }, 00:35:34.307 "method": "bdev_nvme_attach_controller" 00:35:34.307 } 00:35:34.307 EOF 00:35:34.307 )") 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:34.307 "params": { 00:35:34.307 "name": "Nvme0", 00:35:34.307 "trtype": "tcp", 00:35:34.307 "traddr": "10.0.0.2", 00:35:34.307 "adrfam": "ipv4", 00:35:34.307 "trsvcid": "4420", 00:35:34.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:34.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:34.307 "hdgst": false, 00:35:34.307 "ddgst": false 00:35:34.307 }, 00:35:34.307 "method": "bdev_nvme_attach_controller" 00:35:34.307 },{ 00:35:34.307 "params": { 00:35:34.307 "name": "Nvme1", 00:35:34.307 "trtype": "tcp", 00:35:34.307 "traddr": "10.0.0.2", 00:35:34.307 "adrfam": "ipv4", 00:35:34.307 "trsvcid": "4420", 00:35:34.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.307 "hdgst": false, 00:35:34.307 "ddgst": false 00:35:34.307 }, 00:35:34.307 "method": "bdev_nvme_attach_controller" 00:35:34.307 }' 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:34.307 01:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.307 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:34.307 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:34.307 fio-3.35 00:35:34.307 Starting 2 threads 00:35:34.307 EAL: No free 2048 kB hugepages reported on node 1 00:35:44.271 00:35:44.271 filename0: (groupid=0, jobs=1): err= 0: pid=1312218: Sun Jul 14 01:21:33 2024 00:35:44.271 read: IOPS=186, BW=746KiB/s (764kB/s)(7472KiB/10021msec) 00:35:44.271 slat (nsec): min=7040, max=52459, avg=10465.94, stdev=4991.23 00:35:44.271 clat (usec): min=848, max=43899, avg=21426.75, stdev=20436.69 00:35:44.271 lat (usec): min=855, max=43934, avg=21437.22, stdev=20435.72 00:35:44.271 clat percentiles (usec): 00:35:44.271 | 1.00th=[ 873], 5.00th=[ 906], 10.00th=[ 922], 20.00th=[ 938], 00:35:44.271 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[41157], 60.00th=[41681], 00:35:44.271 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:44.271 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:35:44.271 | 99.99th=[43779] 
00:35:44.271 bw ( KiB/s): min= 704, max= 768, per=56.90%, avg=745.60, stdev=31.32, samples=20 00:35:44.271 iops : min= 176, max= 192, avg=186.40, stdev= 7.83, samples=20 00:35:44.271 lat (usec) : 1000=45.29% 00:35:44.271 lat (msec) : 2=4.60%, 50=50.11% 00:35:44.271 cpu : usr=93.82%, sys=5.89%, ctx=15, majf=0, minf=115 00:35:44.271 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.271 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.271 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:44.271 filename1: (groupid=0, jobs=1): err= 0: pid=1312219: Sun Jul 14 01:21:33 2024 00:35:44.271 read: IOPS=140, BW=564KiB/s (577kB/s)(5648KiB/10017msec) 00:35:44.271 slat (nsec): min=6670, max=75372, avg=10862.21, stdev=5635.89 00:35:44.271 clat (usec): min=885, max=43863, avg=28343.94, stdev=19142.12 00:35:44.271 lat (usec): min=893, max=43883, avg=28354.80, stdev=19141.84 00:35:44.271 clat percentiles (usec): 00:35:44.271 | 1.00th=[ 914], 5.00th=[ 938], 10.00th=[ 963], 20.00th=[ 1020], 00:35:44.271 | 30.00th=[ 1057], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:35:44.271 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:44.271 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:35:44.271 | 99.99th=[43779] 00:35:44.271 bw ( KiB/s): min= 352, max= 768, per=43.00%, avg=563.20, stdev=177.53, samples=20 00:35:44.271 iops : min= 88, max= 192, avg=140.80, stdev=44.38, samples=20 00:35:44.271 lat (usec) : 1000=15.65% 00:35:44.271 lat (msec) : 2=17.21%, 50=67.14% 00:35:44.271 cpu : usr=93.66%, sys=6.05%, ctx=13, majf=0, minf=188 00:35:44.271 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.271 issued rwts: total=1412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.272 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:44.272 00:35:44.272 Run status group 0 (all jobs): 00:35:44.272 READ: bw=1309KiB/s (1341kB/s), 564KiB/s-746KiB/s (577kB/s-764kB/s), io=12.8MiB (13.4MB), run=10017-10021msec 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.272 00:35:44.272 real 0m11.348s 00:35:44.272 user 0m20.053s 00:35:44.272 sys 0m1.465s 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 ************************************ 00:35:44.272 END TEST fio_dif_1_multi_subsystems 00:35:44.272 ************************************ 00:35:44.272 01:21:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:44.272 01:21:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:44.272 01:21:33 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:44.272 01:21:33 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 ************************************ 00:35:44.272 START TEST fio_dif_rand_params 00:35:44.272 ************************************ 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:44.272 01:21:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 bdev_null0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:44.272 [2024-07-14 01:21:33.431581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:44.272 { 00:35:44.272 "params": { 00:35:44.272 "name": "Nvme$subsystem", 00:35:44.272 "trtype": "$TEST_TRANSPORT", 00:35:44.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:44.272 "adrfam": "ipv4", 00:35:44.272 "trsvcid": "$NVMF_PORT", 00:35:44.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:44.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:44.272 "hdgst": ${hdgst:-false}, 00:35:44.272 "ddgst": ${ddgst:-false} 00:35:44.272 }, 00:35:44.272 "method": "bdev_nvme_attach_controller" 00:35:44.272 } 00:35:44.272 EOF 00:35:44.272 )") 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
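[editor's note] The ldd/grep/awk triplets in the trace are the harness checking whether the fio plugin was built with a sanitizer and, if so, which runtime library has to be preloaded ahead of it. Condensed, the pattern amounts to the sketch below (plugin path taken from the trace; on this build both lookups return nothing, so asan_lib stays empty and only the plugin itself ends up in LD_PRELOAD later):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # the third column of the matching ldd line is the resolved library path
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [ -n "$asan_lib" ] && break
done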
00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:44.272 "params": { 00:35:44.272 "name": "Nvme0", 00:35:44.272 "trtype": "tcp", 00:35:44.272 "traddr": "10.0.0.2", 00:35:44.272 "adrfam": "ipv4", 00:35:44.272 "trsvcid": "4420", 00:35:44.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:44.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:44.272 "hdgst": false, 00:35:44.272 "ddgst": false 00:35:44.272 }, 00:35:44.272 "method": "bdev_nvme_attach_controller" 00:35:44.272 }' 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:44.272 01:21:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:44.531 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:44.531 ... 
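[editor's note] Outside the harness the same run can be reproduced with plain files instead of the /dev/fd process substitutions: the bdev_nvme_attach_controller entries printed above are wrapped in SPDK's usual JSON-config envelope ({"subsystems": [{"subsystem": "bdev", "config": [ ... ]}]}) and handed to --spdk_json_conf, while the job file corresponds to the filename0 header fio echoes back. The job file below is only an approximation of what gen_fio_conf produces for this first pass; the filename Nvme0n1 assumes bdev_nvme_attach_controller's usual naming (controller Nvme0, namespace 1), and the generated file may set further options:

cat > dif_rand.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF
# with the JSON config saved as bdev.json, the equivalent of the LD_PRELOAD invocation above is:
#   LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
#       /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json dif_rand.fio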
00:35:44.531 fio-3.35 00:35:44.531 Starting 3 threads 00:35:44.531 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.091 00:35:51.091 filename0: (groupid=0, jobs=1): err= 0: pid=1313610: Sun Jul 14 01:21:39 2024 00:35:51.091 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(128MiB/5019msec) 00:35:51.091 slat (nsec): min=4766, max=49980, avg=14596.26, stdev=4736.00 00:35:51.091 clat (usec): min=5897, max=93013, avg=14738.36, stdev=13773.46 00:35:51.091 lat (usec): min=5909, max=93036, avg=14752.95, stdev=13773.58 00:35:51.091 clat percentiles (usec): 00:35:51.091 | 1.00th=[ 6194], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 8455], 00:35:51.091 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10945], 00:35:51.091 | 70.00th=[11994], 80.00th=[13173], 90.00th=[49021], 95.00th=[50594], 00:35:51.091 | 99.00th=[53740], 99.50th=[55313], 99.90th=[91751], 99.95th=[92799], 00:35:51.091 | 99.99th=[92799] 00:35:51.091 bw ( KiB/s): min=22272, max=33792, per=34.46%, avg=26035.20, stdev=3858.07, samples=10 00:35:51.091 iops : min= 174, max= 264, avg=203.40, stdev=30.14, samples=10 00:35:51.091 lat (msec) : 10=49.80%, 20=39.22%, 50=3.24%, 100=7.75% 00:35:51.091 cpu : usr=94.38%, sys=5.00%, ctx=19, majf=0, minf=99 00:35:51.091 IO depths : 1=4.0%, 2=96.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.091 issued rwts: total=1020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.091 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:51.091 filename0: (groupid=0, jobs=1): err= 0: pid=1313611: Sun Jul 14 01:21:39 2024 00:35:51.091 read: IOPS=130, BW=16.3MiB/s (17.1MB/s)(81.4MiB/5003msec) 00:35:51.091 slat (nsec): min=4838, max=43436, avg=15612.80, stdev=5113.57 00:35:51.091 clat (usec): min=5440, max=95672, avg=23029.27, stdev=19894.18 00:35:51.091 lat (usec): min=5453, max=95690, avg=23044.88, stdev=19894.67 00:35:51.091 clat percentiles (usec): 00:35:51.091 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7898], 20.00th=[ 9110], 00:35:51.091 | 30.00th=[10290], 40.00th=[12518], 50.00th=[13698], 60.00th=[14877], 00:35:51.091 | 70.00th=[16909], 80.00th=[52167], 90.00th=[54264], 95.00th=[55313], 00:35:51.091 | 99.00th=[92799], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:35:51.091 | 99.99th=[95945] 00:35:51.091 bw ( KiB/s): min=10240, max=26624, per=22.00%, avg=16617.70, stdev=4423.53, samples=10 00:35:51.091 iops : min= 80, max= 208, avg=129.80, stdev=34.56, samples=10 00:35:51.091 lat (msec) : 10=28.26%, 20=45.78%, 50=1.54%, 100=24.42% 00:35:51.091 cpu : usr=94.66%, sys=4.20%, ctx=16, majf=0, minf=101 00:35:51.091 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.091 issued rwts: total=651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.091 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:51.091 filename0: (groupid=0, jobs=1): err= 0: pid=1313612: Sun Jul 14 01:21:39 2024 00:35:51.091 read: IOPS=258, BW=32.4MiB/s (33.9MB/s)(163MiB/5044msec) 00:35:51.091 slat (nsec): min=4544, max=53794, avg=15464.74, stdev=5223.90 00:35:51.091 clat (usec): min=5427, max=93795, avg=11532.92, stdev=10360.94 00:35:51.091 lat (usec): min=5439, max=93809, avg=11548.38, stdev=10361.08 00:35:51.091 clat percentiles (usec): 
00:35:51.091 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6325], 20.00th=[ 6980], 00:35:51.092 | 30.00th=[ 7635], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634], 00:35:51.092 | 70.00th=[10421], 80.00th=[11600], 90.00th=[12911], 95.00th=[49021], 00:35:51.092 | 99.00th=[52691], 99.50th=[54789], 99.90th=[59507], 99.95th=[93848], 00:35:51.092 | 99.99th=[93848] 00:35:51.092 bw ( KiB/s): min=16896, max=42496, per=44.19%, avg=33382.40, stdev=7556.89, samples=10 00:35:51.092 iops : min= 132, max= 332, avg=260.80, stdev=59.04, samples=10 00:35:51.092 lat (msec) : 10=65.24%, 20=28.94%, 50=1.23%, 100=4.59% 00:35:51.092 cpu : usr=91.81%, sys=6.44%, ctx=383, majf=0, minf=130 00:35:51.092 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.092 issued rwts: total=1306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.092 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:51.092 00:35:51.092 Run status group 0 (all jobs): 00:35:51.092 READ: bw=73.8MiB/s (77.4MB/s), 16.3MiB/s-32.4MiB/s (17.1MB/s-33.9MB/s), io=372MiB (390MB), run=5003-5044msec 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
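[editor's note] The second pass starting here switches to DIF type 2, a 4 KiB fio I/O size, iodepth 16, and three null bdevs behind three subsystems; numjobs=8 across the three job sections (filename0 through filename2) is what produces the 24 fio threads reported further down. Once the --dif-type 2 bdevs below exist, their metadata/protection settings can be read back from the target; a sketch, noting that the exact field names in bdev_get_bdevs output may differ between SPDK releases:

scripts/rpc.py bdev_get_bdevs -b bdev_null0 | jq '.[0] | {block_size, md_size, dif_type}'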
00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 bdev_null0 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 [2024-07-14 01:21:39.468017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 bdev_null1 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 bdev_null2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.092 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:51.093 { 00:35:51.093 "params": { 00:35:51.093 "name": "Nvme$subsystem", 00:35:51.093 "trtype": "$TEST_TRANSPORT", 00:35:51.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.093 "adrfam": "ipv4", 00:35:51.093 "trsvcid": "$NVMF_PORT", 00:35:51.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.093 "hdgst": ${hdgst:-false}, 00:35:51.093 "ddgst": ${ddgst:-false} 00:35:51.093 }, 00:35:51.093 "method": "bdev_nvme_attach_controller" 00:35:51.093 } 00:35:51.093 EOF 00:35:51.093 )") 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:51.093 { 00:35:51.093 "params": { 00:35:51.093 "name": "Nvme$subsystem", 00:35:51.093 "trtype": "$TEST_TRANSPORT", 00:35:51.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.093 "adrfam": "ipv4", 00:35:51.093 "trsvcid": "$NVMF_PORT", 00:35:51.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.093 "hdgst": ${hdgst:-false}, 00:35:51.093 "ddgst": ${ddgst:-false} 00:35:51.093 }, 00:35:51.093 "method": "bdev_nvme_attach_controller" 00:35:51.093 } 00:35:51.093 EOF 00:35:51.093 )") 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:51.093 { 00:35:51.093 "params": { 00:35:51.093 "name": "Nvme$subsystem", 00:35:51.093 "trtype": "$TEST_TRANSPORT", 00:35:51.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.093 "adrfam": "ipv4", 00:35:51.093 "trsvcid": "$NVMF_PORT", 00:35:51.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.093 "hdgst": ${hdgst:-false}, 00:35:51.093 "ddgst": ${ddgst:-false} 00:35:51.093 }, 00:35:51.093 "method": "bdev_nvme_attach_controller" 00:35:51.093 } 00:35:51.093 EOF 00:35:51.093 )") 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:51.093 "params": { 00:35:51.093 "name": "Nvme0", 00:35:51.093 "trtype": "tcp", 00:35:51.093 "traddr": "10.0.0.2", 00:35:51.093 "adrfam": "ipv4", 00:35:51.093 "trsvcid": "4420", 00:35:51.093 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:51.093 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:51.093 "hdgst": false, 00:35:51.093 "ddgst": false 00:35:51.093 }, 00:35:51.093 "method": "bdev_nvme_attach_controller" 00:35:51.093 },{ 00:35:51.093 "params": { 00:35:51.093 "name": "Nvme1", 00:35:51.093 "trtype": "tcp", 00:35:51.093 "traddr": "10.0.0.2", 00:35:51.093 "adrfam": "ipv4", 00:35:51.093 "trsvcid": "4420", 00:35:51.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:51.093 "hdgst": false, 00:35:51.093 "ddgst": false 00:35:51.093 }, 00:35:51.093 "method": "bdev_nvme_attach_controller" 00:35:51.093 },{ 00:35:51.093 "params": { 00:35:51.093 "name": "Nvme2", 00:35:51.093 "trtype": "tcp", 00:35:51.093 "traddr": "10.0.0.2", 00:35:51.093 "adrfam": "ipv4", 00:35:51.093 "trsvcid": "4420", 00:35:51.093 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:51.093 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:51.093 "hdgst": false, 00:35:51.093 "ddgst": false 00:35:51.093 }, 00:35:51.093 "method": "bdev_nvme_attach_controller" 00:35:51.093 }' 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:35:51.093 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:51.094 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:51.094 01:21:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.094 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:51.094 ... 00:35:51.094 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:51.094 ... 00:35:51.094 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:51.094 ... 00:35:51.094 fio-3.35 00:35:51.094 Starting 24 threads 00:35:51.094 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.300 00:36:03.300 filename0: (groupid=0, jobs=1): err= 0: pid=1314470: Sun Jul 14 01:21:50 2024 00:36:03.300 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10087msec) 00:36:03.300 slat (nsec): min=14934, max=91250, avg=57156.71, stdev=10007.43 00:36:03.300 clat (msec): min=95, max=298, avg=182.92, stdev=24.80 00:36:03.300 lat (msec): min=95, max=298, avg=182.98, stdev=24.80 00:36:03.300 clat percentiles (msec): 00:36:03.300 | 1.00th=[ 96], 5.00th=[ 138], 10.00th=[ 169], 20.00th=[ 176], 00:36:03.300 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.300 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 226], 00:36:03.300 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:36:03.300 | 99.99th=[ 300] 00:36:03.300 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.40, stdev=56.83, samples=20 00:36:03.300 iops : min= 64, max= 96, avg=86.35, stdev=14.21, samples=20 00:36:03.300 lat (msec) : 100=1.82%, 250=96.36%, 500=1.82% 00:36:03.300 cpu : usr=97.69%, sys=1.79%, ctx=37, majf=0, minf=9 00:36:03.300 IO depths : 1=3.9%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:03.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.300 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.300 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.300 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.300 filename0: (groupid=0, jobs=1): err= 0: pid=1314471: Sun Jul 14 01:21:50 2024 00:36:03.300 read: IOPS=93, BW=376KiB/s (385kB/s)(3800KiB/10112msec) 00:36:03.301 slat (usec): min=5, max=292, avg=32.61, stdev=23.41 00:36:03.301 clat (msec): min=6, max=189, avg=170.01, stdev=39.62 00:36:03.301 lat (msec): min=6, max=189, avg=170.04, stdev=39.62 00:36:03.301 clat percentiles (msec): 00:36:03.301 | 1.00th=[ 8], 5.00th=[ 60], 10.00th=[ 133], 20.00th=[ 176], 00:36:03.301 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.301 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.301 | 99.00th=[ 190], 99.50th=[ 190], 99.90th=[ 190], 99.95th=[ 190], 00:36:03.301 | 99.99th=[ 190] 00:36:03.301 bw ( KiB/s): min= 256, max= 688, per=4.42%, avg=373.60, stdev=90.55, samples=20 00:36:03.301 iops : min= 64, max= 172, avg=93.40, stdev=22.64, samples=20 00:36:03.301 lat (msec) : 10=2.11%, 20=1.05%, 50=1.68%, 100=0.84%, 250=94.32% 00:36:03.301 cpu : usr=94.57%, sys=3.16%, ctx=86, majf=0, minf=9 
00:36:03.301 IO depths : 1=5.9%, 2=11.9%, 4=24.0%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 issued rwts: total=950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.301 filename0: (groupid=0, jobs=1): err= 0: pid=1314472: Sun Jul 14 01:21:50 2024 00:36:03.301 read: IOPS=87, BW=350KiB/s (359kB/s)(3520KiB/10045msec) 00:36:03.301 slat (usec): min=11, max=284, avg=30.42, stdev=18.00 00:36:03.301 clat (msec): min=113, max=233, avg=182.34, stdev= 6.87 00:36:03.301 lat (msec): min=114, max=233, avg=182.37, stdev= 6.87 00:36:03.301 clat percentiles (msec): 00:36:03.301 | 1.00th=[ 159], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 178], 00:36:03.301 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.301 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.301 | 99.00th=[ 188], 99.50th=[ 190], 99.90th=[ 234], 99.95th=[ 234], 00:36:03.301 | 99.99th=[ 234] 00:36:03.301 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=60.18, samples=20 00:36:03.301 iops : min= 64, max= 96, avg=86.40, stdev=15.05, samples=20 00:36:03.301 lat (msec) : 250=100.00% 00:36:03.301 cpu : usr=95.80%, sys=2.76%, ctx=140, majf=0, minf=9 00:36:03.301 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.301 filename0: (groupid=0, jobs=1): err= 0: pid=1314473: Sun Jul 14 01:21:50 2024 00:36:03.301 read: IOPS=86, BW=344KiB/s (353kB/s)(3456KiB/10033msec) 00:36:03.301 slat (nsec): min=11801, max=66481, avg=26991.11, stdev=6326.22 00:36:03.301 clat (msec): min=123, max=335, avg=185.53, stdev=21.54 00:36:03.301 lat (msec): min=123, max=335, avg=185.56, stdev=21.54 00:36:03.301 clat percentiles (msec): 00:36:03.301 | 1.00th=[ 171], 5.00th=[ 174], 10.00th=[ 174], 20.00th=[ 180], 00:36:03.301 | 30.00th=[ 184], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.301 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.301 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:36:03.301 | 99.99th=[ 338] 00:36:03.301 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=339.10, stdev=75.27, samples=20 00:36:03.301 iops : min= 32, max= 96, avg=84.75, stdev=18.85, samples=20 00:36:03.301 lat (msec) : 250=98.15%, 500=1.85% 00:36:03.301 cpu : usr=96.10%, sys=2.50%, ctx=66, majf=0, minf=9 00:36:03.301 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.301 filename0: (groupid=0, jobs=1): err= 0: pid=1314474: Sun Jul 14 01:21:50 2024 00:36:03.301 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10092msec) 00:36:03.301 slat (nsec): min=9239, max=91303, avg=39097.28, stdev=16157.22 00:36:03.301 clat (msec): min=120, max=246, avg=182.44, 
stdev=11.83 00:36:03.301 lat (msec): min=120, max=246, avg=182.48, stdev=11.83 00:36:03.301 clat percentiles (msec): 00:36:03.301 | 1.00th=[ 124], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 178], 00:36:03.301 | 30.00th=[ 184], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.301 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.301 | 99.00th=[ 234], 99.50th=[ 239], 99.90th=[ 247], 99.95th=[ 247], 00:36:03.301 | 99.99th=[ 247] 00:36:03.301 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=60.18, samples=20 00:36:03.301 iops : min= 64, max= 96, avg=86.40, stdev=15.05, samples=20 00:36:03.301 lat (msec) : 250=100.00% 00:36:03.301 cpu : usr=97.69%, sys=1.79%, ctx=60, majf=0, minf=9 00:36:03.301 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.301 filename0: (groupid=0, jobs=1): err= 0: pid=1314475: Sun Jul 14 01:21:50 2024 00:36:03.301 read: IOPS=87, BW=350KiB/s (359kB/s)(3520KiB/10046msec) 00:36:03.301 slat (usec): min=4, max=105, avg=53.85, stdev=11.65 00:36:03.301 clat (msec): min=114, max=279, avg=182.28, stdev=25.07 00:36:03.301 lat (msec): min=114, max=279, avg=182.33, stdev=25.07 00:36:03.301 clat percentiles (msec): 00:36:03.301 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 161], 20.00th=[ 176], 00:36:03.301 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.301 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 239], 00:36:03.301 | 99.00th=[ 249], 99.50th=[ 249], 99.90th=[ 279], 99.95th=[ 279], 00:36:03.301 | 99.99th=[ 279] 00:36:03.301 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=48.25, samples=20 00:36:03.301 iops : min= 64, max= 96, avg=86.40, stdev=12.06, samples=20 00:36:03.301 lat (msec) : 250=99.77%, 500=0.23% 00:36:03.301 cpu : usr=97.62%, sys=1.94%, ctx=26, majf=0, minf=9 00:36:03.301 IO depths : 1=2.4%, 2=6.9%, 4=18.3%, 8=60.7%, 16=11.7%, 32=0.0%, >=64=0.0% 00:36:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 complete : 0=0.0%, 4=92.9%, 8=3.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.301 filename0: (groupid=0, jobs=1): err= 0: pid=1314476: Sun Jul 14 01:21:50 2024 00:36:03.301 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10094msec) 00:36:03.301 slat (usec): min=5, max=279, avg=53.57, stdev=24.75 00:36:03.301 clat (msec): min=95, max=345, avg=183.04, stdev=21.90 00:36:03.301 lat (msec): min=95, max=345, avg=183.09, stdev=21.89 00:36:03.301 clat percentiles (msec): 00:36:03.301 | 1.00th=[ 96], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.301 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.301 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.301 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 347], 99.95th=[ 347], 00:36:03.301 | 99.99th=[ 347] 00:36:03.301 bw ( KiB/s): min= 240, max= 400, per=4.08%, avg=345.60, stdev=62.16, samples=20 00:36:03.301 iops : min= 60, max= 100, avg=86.40, stdev=15.54, samples=20 00:36:03.301 lat (msec) : 100=1.82%, 250=96.36%, 500=1.82% 00:36:03.301 cpu : usr=94.89%, 
sys=3.25%, ctx=292, majf=0, minf=9 00:36:03.301 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.301 filename0: (groupid=0, jobs=1): err= 0: pid=1314477: Sun Jul 14 01:21:50 2024 00:36:03.301 read: IOPS=89, BW=356KiB/s (365kB/s)(3584KiB/10059msec) 00:36:03.301 slat (usec): min=9, max=168, avg=55.92, stdev=19.66 00:36:03.301 clat (msec): min=62, max=211, avg=178.43, stdev=20.29 00:36:03.301 lat (msec): min=62, max=211, avg=178.49, stdev=20.29 00:36:03.301 clat percentiles (msec): 00:36:03.301 | 1.00th=[ 63], 5.00th=[ 165], 10.00th=[ 171], 20.00th=[ 176], 00:36:03.301 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.301 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.301 | 99.00th=[ 194], 99.50th=[ 194], 99.90th=[ 211], 99.95th=[ 211], 00:36:03.301 | 99.99th=[ 211] 00:36:03.301 bw ( KiB/s): min= 256, max= 512, per=4.17%, avg=352.00, stdev=70.42, samples=20 00:36:03.301 iops : min= 64, max= 128, avg=88.00, stdev=17.60, samples=20 00:36:03.301 lat (msec) : 100=3.35%, 250=96.65% 00:36:03.301 cpu : usr=93.81%, sys=3.47%, ctx=161, majf=0, minf=9 00:36:03.301 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.301 filename1: (groupid=0, jobs=1): err= 0: pid=1314478: Sun Jul 14 01:21:50 2024 00:36:03.301 read: IOPS=93, BW=373KiB/s (382kB/s)(3768KiB/10113msec) 00:36:03.301 slat (nsec): min=5060, max=93350, avg=49578.00, stdev=14038.37 00:36:03.301 clat (msec): min=6, max=267, avg=171.17, stdev=43.93 00:36:03.301 lat (msec): min=6, max=267, avg=171.21, stdev=43.94 00:36:03.301 clat percentiles (msec): 00:36:03.301 | 1.00th=[ 7], 5.00th=[ 54], 10.00th=[ 111], 20.00th=[ 174], 00:36:03.301 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.301 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.301 | 99.00th=[ 259], 99.50th=[ 259], 99.90th=[ 268], 99.95th=[ 268], 00:36:03.301 | 99.99th=[ 268] 00:36:03.301 bw ( KiB/s): min= 256, max= 768, per=4.38%, avg=370.40, stdev=108.30, samples=20 00:36:03.301 iops : min= 64, max= 192, avg=92.60, stdev=27.08, samples=20 00:36:03.301 lat (msec) : 10=3.40%, 100=5.10%, 250=87.69%, 500=3.82% 00:36:03.301 cpu : usr=96.60%, sys=2.19%, ctx=32, majf=0, minf=9 00:36:03.301 IO depths : 1=3.9%, 2=10.2%, 4=25.1%, 8=52.3%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:03.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.301 issued rwts: total=942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.301 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.301 filename1: (groupid=0, jobs=1): err= 0: pid=1314479: Sun Jul 14 01:21:50 2024 00:36:03.301 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10092msec) 00:36:03.301 slat (nsec): min=7246, max=89803, avg=53075.45, stdev=12820.43 
00:36:03.301 clat (msec): min=120, max=251, avg=182.31, stdev=20.47 00:36:03.301 lat (msec): min=120, max=251, avg=182.36, stdev=20.47 00:36:03.301 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 122], 5.00th=[ 124], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.302 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 234], 00:36:03.302 | 99.00th=[ 249], 99.50th=[ 249], 99.90th=[ 251], 99.95th=[ 251], 00:36:03.302 | 99.99th=[ 251] 00:36:03.302 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=56.96, samples=20 00:36:03.302 iops : min= 64, max= 96, avg=86.40, stdev=14.24, samples=20 00:36:03.302 lat (msec) : 250=99.77%, 500=0.23% 00:36:03.302 cpu : usr=96.41%, sys=2.31%, ctx=82, majf=0, minf=9 00:36:03.302 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.302 filename1: (groupid=0, jobs=1): err= 0: pid=1314480: Sun Jul 14 01:21:50 2024 00:36:03.302 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10086msec) 00:36:03.302 slat (nsec): min=19459, max=99976, avg=58298.88, stdev=9157.27 00:36:03.302 clat (msec): min=94, max=297, avg=182.85, stdev=20.22 00:36:03.302 lat (msec): min=94, max=297, avg=182.90, stdev=20.22 00:36:03.302 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 95], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.302 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.302 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:36:03.302 | 99.99th=[ 296] 00:36:03.302 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.40, stdev=60.05, samples=20 00:36:03.302 iops : min= 64, max= 96, avg=86.35, stdev=15.01, samples=20 00:36:03.302 lat (msec) : 100=1.82%, 250=96.36%, 500=1.82% 00:36:03.302 cpu : usr=94.07%, sys=3.18%, ctx=196, majf=0, minf=9 00:36:03.302 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.302 filename1: (groupid=0, jobs=1): err= 0: pid=1314481: Sun Jul 14 01:21:50 2024 00:36:03.302 read: IOPS=86, BW=345KiB/s (353kB/s)(3456KiB/10031msec) 00:36:03.302 slat (nsec): min=15309, max=65105, avg=27326.98, stdev=6762.49 00:36:03.302 clat (msec): min=121, max=434, avg=185.51, stdev=24.78 00:36:03.302 lat (msec): min=121, max=434, avg=185.53, stdev=24.78 00:36:03.302 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 123], 5.00th=[ 174], 10.00th=[ 174], 20.00th=[ 178], 00:36:03.302 | 30.00th=[ 184], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.302 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 435], 99.95th=[ 435], 00:36:03.302 | 99.99th=[ 435] 00:36:03.302 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=339.20, stdev=75.15, samples=20 00:36:03.302 iops : min= 32, max= 96, avg=84.80, stdev=18.79, 
samples=20 00:36:03.302 lat (msec) : 250=98.15%, 500=1.85% 00:36:03.302 cpu : usr=97.69%, sys=1.76%, ctx=43, majf=0, minf=9 00:36:03.302 IO depths : 1=5.7%, 2=11.8%, 4=24.7%, 8=51.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:36:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.302 filename1: (groupid=0, jobs=1): err= 0: pid=1314482: Sun Jul 14 01:21:50 2024 00:36:03.302 read: IOPS=90, BW=361KiB/s (370kB/s)(3648KiB/10107msec) 00:36:03.302 slat (nsec): min=7629, max=74361, avg=30224.11, stdev=12194.76 00:36:03.302 clat (msec): min=62, max=267, avg=177.04, stdev=31.93 00:36:03.302 lat (msec): min=62, max=267, avg=177.07, stdev=31.93 00:36:03.302 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 63], 5.00th=[ 96], 10.00th=[ 161], 20.00th=[ 176], 00:36:03.302 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:36:03.302 | 99.00th=[ 262], 99.50th=[ 262], 99.90th=[ 268], 99.95th=[ 268], 00:36:03.302 | 99.99th=[ 268] 00:36:03.302 bw ( KiB/s): min= 256, max= 512, per=4.24%, avg=358.40, stdev=64.29, samples=20 00:36:03.302 iops : min= 64, max= 128, avg=89.60, stdev=16.07, samples=20 00:36:03.302 lat (msec) : 100=5.48%, 250=90.35%, 500=4.17% 00:36:03.302 cpu : usr=97.50%, sys=2.18%, ctx=14, majf=0, minf=9 00:36:03.302 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:36:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.302 filename1: (groupid=0, jobs=1): err= 0: pid=1314483: Sun Jul 14 01:21:50 2024 00:36:03.302 read: IOPS=87, BW=350KiB/s (359kB/s)(3520KiB/10045msec) 00:36:03.302 slat (usec): min=7, max=394, avg=34.74, stdev=28.81 00:36:03.302 clat (msec): min=113, max=251, avg=182.36, stdev=20.91 00:36:03.302 lat (msec): min=113, max=251, avg=182.39, stdev=20.90 00:36:03.302 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.302 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 234], 00:36:03.302 | 99.00th=[ 249], 99.50th=[ 249], 99.90th=[ 253], 99.95th=[ 253], 00:36:03.302 | 99.99th=[ 253] 00:36:03.302 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=56.96, samples=20 00:36:03.302 iops : min= 64, max= 96, avg=86.40, stdev=14.24, samples=20 00:36:03.302 lat (msec) : 250=99.77%, 500=0.23% 00:36:03.302 cpu : usr=94.18%, sys=3.18%, ctx=209, majf=0, minf=9 00:36:03.302 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:36:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.302 filename1: (groupid=0, jobs=1): err= 0: pid=1314484: Sun Jul 14 01:21:50 2024 00:36:03.302 read: IOPS=87, BW=348KiB/s (357kB/s)(3512KiB/10084msec) 
00:36:03.302 slat (nsec): min=21531, max=94011, avg=56690.30, stdev=9422.57 00:36:03.302 clat (msec): min=96, max=294, avg=183.13, stdev=20.83 00:36:03.302 lat (msec): min=96, max=294, avg=183.19, stdev=20.83 00:36:03.302 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 97], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.302 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 194], 00:36:03.302 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:36:03.302 | 99.99th=[ 296] 00:36:03.302 bw ( KiB/s): min= 256, max= 384, per=4.07%, avg=344.80, stdev=59.75, samples=20 00:36:03.302 iops : min= 64, max= 96, avg=86.20, stdev=14.94, samples=20 00:36:03.302 lat (msec) : 100=1.59%, 250=96.58%, 500=1.82% 00:36:03.302 cpu : usr=97.70%, sys=1.83%, ctx=24, majf=0, minf=9 00:36:03.302 IO depths : 1=5.4%, 2=11.6%, 4=25.1%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:36:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 issued rwts: total=878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.302 filename1: (groupid=0, jobs=1): err= 0: pid=1314485: Sun Jul 14 01:21:50 2024 00:36:03.302 read: IOPS=85, BW=343KiB/s (351kB/s)(3456KiB/10077msec) 00:36:03.302 slat (usec): min=7, max=160, avg=53.20, stdev=17.45 00:36:03.302 clat (msec): min=116, max=431, avg=185.41, stdev=30.83 00:36:03.302 lat (msec): min=116, max=431, avg=185.46, stdev=30.82 00:36:03.302 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 123], 5.00th=[ 125], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.302 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 245], 00:36:03.302 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 430], 99.95th=[ 430], 00:36:03.302 | 99.99th=[ 430] 00:36:03.302 bw ( KiB/s): min= 128, max= 400, per=4.01%, avg=339.20, stdev=72.97, samples=20 00:36:03.302 iops : min= 32, max= 100, avg=84.80, stdev=18.24, samples=20 00:36:03.302 lat (msec) : 250=98.15%, 500=1.85% 00:36:03.302 cpu : usr=93.44%, sys=3.69%, ctx=307, majf=0, minf=9 00:36:03.302 IO depths : 1=3.0%, 2=9.1%, 4=24.7%, 8=53.7%, 16=9.5%, 32=0.0%, >=64=0.0% 00:36:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.302 filename2: (groupid=0, jobs=1): err= 0: pid=1314486: Sun Jul 14 01:21:50 2024 00:36:03.302 read: IOPS=87, BW=350KiB/s (359kB/s)(3520KiB/10045msec) 00:36:03.302 slat (nsec): min=11569, max=81017, avg=49060.43, stdev=12972.78 00:36:03.302 clat (msec): min=114, max=251, avg=182.24, stdev=20.60 00:36:03.302 lat (msec): min=114, max=251, avg=182.29, stdev=20.60 00:36:03.302 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 123], 5.00th=[ 125], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.302 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 234], 00:36:03.302 | 99.00th=[ 249], 99.50th=[ 249], 99.90th=[ 251], 99.95th=[ 251], 00:36:03.302 | 99.99th=[ 251] 00:36:03.302 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=56.96, samples=20 
00:36:03.302 iops : min= 64, max= 96, avg=86.40, stdev=14.24, samples=20 00:36:03.302 lat (msec) : 250=99.77%, 500=0.23% 00:36:03.302 cpu : usr=97.66%, sys=1.84%, ctx=228, majf=0, minf=9 00:36:03.302 IO depths : 1=3.1%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:03.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.302 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.302 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.302 filename2: (groupid=0, jobs=1): err= 0: pid=1314487: Sun Jul 14 01:21:50 2024 00:36:03.302 read: IOPS=89, BW=358KiB/s (366kB/s)(3584KiB/10016msec) 00:36:03.302 slat (usec): min=8, max=226, avg=54.45, stdev=12.33 00:36:03.302 clat (msec): min=62, max=194, avg=178.36, stdev=20.31 00:36:03.302 lat (msec): min=62, max=194, avg=178.42, stdev=20.31 00:36:03.302 clat percentiles (msec): 00:36:03.302 | 1.00th=[ 63], 5.00th=[ 165], 10.00th=[ 171], 20.00th=[ 176], 00:36:03.302 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.302 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.303 | 99.00th=[ 194], 99.50th=[ 194], 99.90th=[ 194], 99.95th=[ 194], 00:36:03.303 | 99.99th=[ 194] 00:36:03.303 bw ( KiB/s): min= 256, max= 512, per=4.17%, avg=352.00, stdev=70.42, samples=20 00:36:03.303 iops : min= 64, max= 128, avg=88.00, stdev=17.60, samples=20 00:36:03.303 lat (msec) : 100=3.57%, 250=96.43% 00:36:03.303 cpu : usr=97.47%, sys=1.90%, ctx=49, majf=0, minf=9 00:36:03.303 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.303 filename2: (groupid=0, jobs=1): err= 0: pid=1314488: Sun Jul 14 01:21:50 2024 00:36:03.303 read: IOPS=90, BW=361KiB/s (369kB/s)(3648KiB/10116msec) 00:36:03.303 slat (usec): min=6, max=257, avg=32.96, stdev=16.15 00:36:03.303 clat (msec): min=66, max=262, avg=177.13, stdev=28.50 00:36:03.303 lat (msec): min=66, max=262, avg=177.16, stdev=28.50 00:36:03.303 clat percentiles (msec): 00:36:03.303 | 1.00th=[ 67], 5.00th=[ 96], 10.00th=[ 165], 20.00th=[ 176], 00:36:03.303 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.303 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.303 | 99.00th=[ 262], 99.50th=[ 262], 99.90th=[ 264], 99.95th=[ 264], 00:36:03.303 | 99.99th=[ 264] 00:36:03.303 bw ( KiB/s): min= 256, max= 512, per=4.24%, avg=358.40, stdev=62.60, samples=20 00:36:03.303 iops : min= 64, max= 128, avg=89.60, stdev=15.65, samples=20 00:36:03.303 lat (msec) : 100=5.26%, 250=92.32%, 500=2.41% 00:36:03.303 cpu : usr=97.16%, sys=2.37%, ctx=21, majf=0, minf=9 00:36:03.303 IO depths : 1=3.7%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:36:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.303 filename2: (groupid=0, jobs=1): err= 0: pid=1314489: Sun Jul 14 01:21:50 2024 00:36:03.303 read: 
IOPS=93, BW=375KiB/s (384kB/s)(3776KiB/10066msec) 00:36:03.303 slat (nsec): min=4394, max=97531, avg=48604.60, stdev=15603.88 00:36:03.303 clat (msec): min=7, max=263, avg=169.59, stdev=45.16 00:36:03.303 lat (msec): min=7, max=263, avg=169.63, stdev=45.17 00:36:03.303 clat percentiles (msec): 00:36:03.303 | 1.00th=[ 8], 5.00th=[ 45], 10.00th=[ 113], 20.00th=[ 174], 00:36:03.303 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.303 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 190], 00:36:03.303 | 99.00th=[ 262], 99.50th=[ 262], 99.90th=[ 264], 99.95th=[ 264], 00:36:03.303 | 99.99th=[ 264] 00:36:03.303 bw ( KiB/s): min= 256, max= 768, per=4.39%, avg=371.20, stdev=106.46, samples=20 00:36:03.303 iops : min= 64, max= 192, avg=92.80, stdev=26.62, samples=20 00:36:03.303 lat (msec) : 10=3.39%, 50=1.69%, 100=1.69%, 250=90.47%, 500=2.75% 00:36:03.303 cpu : usr=97.56%, sys=1.87%, ctx=63, majf=0, minf=9 00:36:03.303 IO depths : 1=3.8%, 2=9.9%, 4=24.4%, 8=53.3%, 16=8.7%, 32=0.0%, >=64=0.0% 00:36:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.303 filename2: (groupid=0, jobs=1): err= 0: pid=1314490: Sun Jul 14 01:21:50 2024 00:36:03.303 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10089msec) 00:36:03.303 slat (nsec): min=35809, max=81598, avg=53220.76, stdev=7603.82 00:36:03.303 clat (msec): min=99, max=267, avg=182.94, stdev=10.24 00:36:03.303 lat (msec): min=99, max=267, avg=182.99, stdev=10.23 00:36:03.303 clat percentiles (msec): 00:36:03.303 | 1.00th=[ 161], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 178], 00:36:03.303 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:36:03.303 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.303 | 99.00th=[ 230], 99.50th=[ 230], 99.90th=[ 268], 99.95th=[ 268], 00:36:03.303 | 99.99th=[ 268] 00:36:03.303 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=60.18, samples=20 00:36:03.303 iops : min= 64, max= 96, avg=86.40, stdev=15.05, samples=20 00:36:03.303 lat (msec) : 100=0.23%, 250=99.55%, 500=0.23% 00:36:03.303 cpu : usr=97.80%, sys=1.75%, ctx=34, majf=0, minf=9 00:36:03.303 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.303 filename2: (groupid=0, jobs=1): err= 0: pid=1314491: Sun Jul 14 01:21:50 2024 00:36:03.303 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10093msec) 00:36:03.303 slat (nsec): min=4009, max=47873, avg=25435.67, stdev=5404.91 00:36:03.303 clat (msec): min=96, max=301, avg=183.22, stdev=20.57 00:36:03.303 lat (msec): min=96, max=301, avg=183.24, stdev=20.57 00:36:03.303 clat percentiles (msec): 00:36:03.303 | 1.00th=[ 96], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.303 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.303 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 188], 00:36:03.303 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:36:03.303 | 99.99th=[ 300] 
00:36:03.303 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=60.18, samples=20 00:36:03.303 iops : min= 64, max= 96, avg=86.40, stdev=15.05, samples=20 00:36:03.303 lat (msec) : 100=1.82%, 250=96.36%, 500=1.82% 00:36:03.303 cpu : usr=95.40%, sys=3.04%, ctx=36, majf=0, minf=9 00:36:03.303 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.303 filename2: (groupid=0, jobs=1): err= 0: pid=1314492: Sun Jul 14 01:21:50 2024 00:36:03.303 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10092msec) 00:36:03.303 slat (usec): min=11, max=179, avg=32.71, stdev=16.93 00:36:03.303 clat (msec): min=120, max=249, avg=182.50, stdev=20.30 00:36:03.303 lat (msec): min=120, max=249, avg=182.53, stdev=20.30 00:36:03.303 clat percentiles (msec): 00:36:03.303 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 171], 20.00th=[ 178], 00:36:03.303 | 30.00th=[ 184], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.303 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 230], 00:36:03.303 | 99.00th=[ 249], 99.50th=[ 249], 99.90th=[ 249], 99.95th=[ 249], 00:36:03.303 | 99.99th=[ 249] 00:36:03.303 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=56.96, samples=20 00:36:03.303 iops : min= 64, max= 96, avg=86.40, stdev=14.24, samples=20 00:36:03.303 lat (msec) : 250=100.00% 00:36:03.303 cpu : usr=96.22%, sys=2.50%, ctx=59, majf=0, minf=9 00:36:03.303 IO depths : 1=3.1%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.303 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:03.303 filename2: (groupid=0, jobs=1): err= 0: pid=1314493: Sun Jul 14 01:21:50 2024 00:36:03.303 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10083msec) 00:36:03.303 slat (nsec): min=23734, max=92066, avg=56799.50, stdev=8921.48 00:36:03.303 clat (msec): min=95, max=294, avg=182.85, stdev=24.47 00:36:03.303 lat (msec): min=95, max=294, avg=182.91, stdev=24.47 00:36:03.303 clat percentiles (msec): 00:36:03.303 | 1.00th=[ 96], 5.00th=[ 138], 10.00th=[ 169], 20.00th=[ 176], 00:36:03.303 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 186], 00:36:03.303 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 226], 00:36:03.303 | 99.00th=[ 296], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:36:03.303 | 99.99th=[ 296] 00:36:03.303 bw ( KiB/s): min= 256, max= 384, per=4.08%, avg=345.60, stdev=56.96, samples=20 00:36:03.303 iops : min= 64, max= 96, avg=86.40, stdev=14.24, samples=20 00:36:03.303 lat (msec) : 100=1.82%, 250=96.36%, 500=1.82% 00:36:03.303 cpu : usr=97.84%, sys=1.72%, ctx=27, majf=0, minf=9 00:36:03.303 IO depths : 1=3.9%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:36:03.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.303 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.303 latency : target=0, window=0, percentile=100.00%, depth=16 
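A quick sanity check on the per-file numbers above: fio's avg bandwidth is simply avg IOPS times the read block size, so 86.40 IOPS x 4 KiB = 345.6 KiB/s, which matches the reported avg=345.60. The 4 KiB read size is inferred from that arithmetic; the bs= setting for this group was made earlier in the test and is not visible in this excerpt. Likewise, total=880 reads over the ~10.1 s runtime works out to roughly 87 reads per second, consistent with the iops line.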
00:36:03.303 00:36:03.303 Run status group 0 (all jobs): 00:36:03.303 READ: bw=8447KiB/s (8650kB/s), 343KiB/s-376KiB/s (351kB/s-385kB/s), io=83.4MiB (87.5MB), run=10016-10116msec 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.303 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 bdev_null0 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 [2024-07-14 01:21:51.130807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 bdev_null1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.304 { 00:36:03.304 "params": { 00:36:03.304 "name": "Nvme$subsystem", 00:36:03.304 "trtype": "$TEST_TRANSPORT", 00:36:03.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.304 "adrfam": "ipv4", 00:36:03.304 "trsvcid": "$NVMF_PORT", 00:36:03.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.304 "hdgst": ${hdgst:-false}, 00:36:03.304 "ddgst": ${ddgst:-false} 00:36:03.304 }, 00:36:03.304 "method": "bdev_nvme_attach_controller" 00:36:03.304 } 00:36:03.304 EOF 00:36:03.304 )") 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.304 { 00:36:03.304 "params": { 00:36:03.304 "name": "Nvme$subsystem", 00:36:03.304 "trtype": "$TEST_TRANSPORT", 00:36:03.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.304 "adrfam": "ipv4", 00:36:03.304 "trsvcid": "$NVMF_PORT", 00:36:03.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.304 "hdgst": ${hdgst:-false}, 00:36:03.304 "ddgst": ${ddgst:-false} 00:36:03.304 }, 00:36:03.304 "method": "bdev_nvme_attach_controller" 00:36:03.304 } 00:36:03.304 EOF 00:36:03.304 )") 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
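The gen_nvmf_target_json helper traced above collects one bdev_nvme_attach_controller entry per subsystem, and the jq step folds them into the JSON config that fio's spdk_bdev engine reads from /dev/fd/62. The finished file is not printed in one piece in this log, but based on the per-controller params shown just below it should look roughly like the sketch here (the outer subsystems/config wrapper is the usual SPDK JSON config layout; treat the exact keys as an approximation rather than a dump from this run):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }

A second entry of the same shape (Nvme1 pointing at nqn.2016-06.io.spdk:cnode1) is appended for the second subsystem created above.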
00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:03.304 "params": { 00:36:03.304 "name": "Nvme0", 00:36:03.304 "trtype": "tcp", 00:36:03.304 "traddr": "10.0.0.2", 00:36:03.304 "adrfam": "ipv4", 00:36:03.304 "trsvcid": "4420", 00:36:03.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.304 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.304 "hdgst": false, 00:36:03.304 "ddgst": false 00:36:03.304 }, 00:36:03.304 "method": "bdev_nvme_attach_controller" 00:36:03.304 },{ 00:36:03.304 "params": { 00:36:03.304 "name": "Nvme1", 00:36:03.304 "trtype": "tcp", 00:36:03.304 "traddr": "10.0.0.2", 00:36:03.304 "adrfam": "ipv4", 00:36:03.304 "trsvcid": "4420", 00:36:03.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:03.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:03.304 "hdgst": false, 00:36:03.304 "ddgst": false 00:36:03.304 }, 00:36:03.304 "method": "bdev_nvme_attach_controller" 00:36:03.304 }' 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.304 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:03.305 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:03.305 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:03.305 01:21:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.305 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:03.305 ... 00:36:03.305 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:03.305 ... 
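On the job side, the banner above (randread, 8 KiB reads with 16 KiB writes and 128 KiB trims configured, spdk_bdev engine, iodepth 8) matches the parameters set at the top of this run: bs=8k,16k,128k, numjobs=2, runtime=5, and the two job sections filename0/filename1 seen in the banner. The job file itself is produced by gen_fio_conf and handed to fio on /dev/fd/61, so it is not shown verbatim; a minimal sketch with the same shape, using illustrative bdev names, would be:

  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  time_based=1
  runtime=5

  [filename0]
  filename=Nvme0n1

  [filename1]
  filename=Nvme1n1

With two job sections and numjobs=2 this accounts for the "Starting 4 threads" line that follows.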
00:36:03.305 fio-3.35 00:36:03.305 Starting 4 threads 00:36:03.305 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.564 00:36:08.564 filename0: (groupid=0, jobs=1): err= 0: pid=1315882: Sun Jul 14 01:21:57 2024 00:36:08.564 read: IOPS=1738, BW=13.6MiB/s (14.2MB/s)(67.9MiB/5002msec) 00:36:08.564 slat (nsec): min=7417, max=59587, avg=14320.92, stdev=6863.21 00:36:08.564 clat (usec): min=1676, max=8370, avg=4561.77, stdev=741.73 00:36:08.564 lat (usec): min=1693, max=8390, avg=4576.09, stdev=740.56 00:36:08.564 clat percentiles (usec): 00:36:08.564 | 1.00th=[ 3458], 5.00th=[ 3785], 10.00th=[ 3916], 20.00th=[ 4080], 00:36:08.564 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:36:08.564 | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 6063], 95.00th=[ 6390], 00:36:08.564 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7635], 99.95th=[ 7767], 00:36:08.564 | 99.99th=[ 8356] 00:36:08.564 bw ( KiB/s): min=13152, max=14608, per=25.16%, avg=13864.44, stdev=454.39, samples=9 00:36:08.564 iops : min= 1644, max= 1826, avg=1733.00, stdev=56.75, samples=9 00:36:08.564 lat (msec) : 2=0.02%, 4=14.25%, 10=85.73% 00:36:08.564 cpu : usr=94.40%, sys=4.64%, ctx=159, majf=0, minf=53 00:36:08.564 IO depths : 1=0.1%, 2=0.7%, 4=68.7%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.564 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.564 issued rwts: total=8694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.564 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:08.564 filename0: (groupid=0, jobs=1): err= 0: pid=1315883: Sun Jul 14 01:21:57 2024 00:36:08.564 read: IOPS=1723, BW=13.5MiB/s (14.1MB/s)(67.4MiB/5004msec) 00:36:08.564 slat (nsec): min=7230, max=55105, avg=11193.37, stdev=4992.34 00:36:08.564 clat (usec): min=2161, max=8489, avg=4605.87, stdev=850.35 00:36:08.564 lat (usec): min=2174, max=8501, avg=4617.06, stdev=849.51 00:36:08.564 clat percentiles (usec): 00:36:08.564 | 1.00th=[ 3261], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 4047], 00:36:08.564 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4490], 00:36:08.564 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 6259], 95.00th=[ 6521], 00:36:08.564 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 7570], 99.95th=[ 7701], 00:36:08.564 | 99.99th=[ 8455] 00:36:08.564 bw ( KiB/s): min=13408, max=14352, per=25.02%, avg=13787.20, stdev=351.44, samples=10 00:36:08.564 iops : min= 1676, max= 1794, avg=1723.40, stdev=43.93, samples=10 00:36:08.564 lat (msec) : 4=17.02%, 10=82.98% 00:36:08.564 cpu : usr=95.60%, sys=3.94%, ctx=10, majf=0, minf=32 00:36:08.564 IO depths : 1=0.1%, 2=2.2%, 4=70.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.564 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.564 issued rwts: total=8625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.564 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:08.564 filename1: (groupid=0, jobs=1): err= 0: pid=1315884: Sun Jul 14 01:21:57 2024 00:36:08.564 read: IOPS=1726, BW=13.5MiB/s (14.1MB/s)(67.5MiB/5003msec) 00:36:08.564 slat (nsec): min=7240, max=51445, avg=11130.43, stdev=4881.91 00:36:08.564 clat (usec): min=2267, max=45589, avg=4598.48, stdev=1490.64 00:36:08.564 lat (usec): min=2282, max=45613, avg=4609.61, stdev=1490.33 00:36:08.564 clat percentiles (usec): 00:36:08.564 | 1.00th=[ 3228], 5.00th=[ 3654], 
10.00th=[ 3884], 20.00th=[ 4080], 00:36:08.564 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4424], 00:36:08.564 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 6194], 95.00th=[ 6456], 00:36:08.564 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 8356], 99.95th=[45351], 00:36:08.564 | 99.99th=[45351] 00:36:08.564 bw ( KiB/s): min=12976, max=14896, per=24.98%, avg=13765.33, stdev=667.08, samples=9 00:36:08.564 iops : min= 1622, max= 1862, avg=1720.67, stdev=83.38, samples=9 00:36:08.564 lat (msec) : 4=13.95%, 10=85.96%, 50=0.09% 00:36:08.565 cpu : usr=95.40%, sys=4.14%, ctx=7, majf=0, minf=82 00:36:08.565 IO depths : 1=0.1%, 2=1.7%, 4=69.9%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.565 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.565 issued rwts: total=8639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.565 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:08.565 filename1: (groupid=0, jobs=1): err= 0: pid=1315885: Sun Jul 14 01:21:57 2024 00:36:08.565 read: IOPS=1701, BW=13.3MiB/s (13.9MB/s)(66.5MiB/5001msec) 00:36:08.565 slat (nsec): min=7217, max=58095, avg=11171.51, stdev=5035.01 00:36:08.565 clat (usec): min=639, max=8278, avg=4668.74, stdev=809.37 00:36:08.565 lat (usec): min=651, max=8287, avg=4679.91, stdev=808.52 00:36:08.565 clat percentiles (usec): 00:36:08.565 | 1.00th=[ 3523], 5.00th=[ 3851], 10.00th=[ 3982], 20.00th=[ 4113], 00:36:08.565 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4490], 00:36:08.565 | 70.00th=[ 4686], 80.00th=[ 5014], 90.00th=[ 6259], 95.00th=[ 6521], 00:36:08.565 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7504], 99.95th=[ 7570], 00:36:08.565 | 99.99th=[ 8291] 00:36:08.565 bw ( KiB/s): min=12752, max=14256, per=24.60%, avg=13557.33, stdev=441.45, samples=9 00:36:08.565 iops : min= 1594, max= 1782, avg=1694.67, stdev=55.18, samples=9 00:36:08.565 lat (usec) : 750=0.01% 00:36:08.565 lat (msec) : 4=10.16%, 10=89.83% 00:36:08.565 cpu : usr=95.22%, sys=4.34%, ctx=7, majf=0, minf=34 00:36:08.565 IO depths : 1=0.1%, 2=1.3%, 4=70.7%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:08.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.565 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:08.565 issued rwts: total=8507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:08.565 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:08.565 00:36:08.565 Run status group 0 (all jobs): 00:36:08.565 READ: bw=53.8MiB/s (56.4MB/s), 13.3MiB/s-13.6MiB/s (13.9MB/s-14.2MB/s), io=269MiB (282MB), run=5001-5004msec 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.565 00:36:08.565 real 0m23.989s 00:36:08.565 user 4m30.363s 00:36:08.565 sys 0m8.363s 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 ************************************ 00:36:08.565 END TEST fio_dif_rand_params 00:36:08.565 ************************************ 00:36:08.565 01:21:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:08.565 01:21:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:08.565 01:21:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:08.565 01:21:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 ************************************ 00:36:08.565 START TEST fio_dif_digest 00:36:08.565 ************************************ 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 bdev_null0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:08.565 [2024-07-14 01:21:57.471565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # 
local file 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:08.565 { 00:36:08.565 "params": { 00:36:08.565 "name": "Nvme$subsystem", 00:36:08.565 "trtype": "$TEST_TRANSPORT", 00:36:08.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:08.565 "adrfam": "ipv4", 00:36:08.565 "trsvcid": "$NVMF_PORT", 00:36:08.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:08.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:08.565 "hdgst": ${hdgst:-false}, 00:36:08.565 "ddgst": ${ddgst:-false} 00:36:08.565 }, 00:36:08.565 "method": "bdev_nvme_attach_controller" 00:36:08.565 } 00:36:08.565 EOF 00:36:08.565 )") 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
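All of the plumbing above reduces to a single invocation: the SPDK fio plugin is LD_PRELOADed and both generated inputs are passed as process-substitution descriptors, the JSON bdev config on /dev/fd/62 and the fio job file on /dev/fd/61. Condensed from the trace (paths as on this build host):

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61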
00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:08.565 01:21:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:08.566 "params": { 00:36:08.566 "name": "Nvme0", 00:36:08.566 "trtype": "tcp", 00:36:08.566 "traddr": "10.0.0.2", 00:36:08.566 "adrfam": "ipv4", 00:36:08.566 "trsvcid": "4420", 00:36:08.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:08.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:08.566 "hdgst": true, 00:36:08.566 "ddgst": true 00:36:08.566 }, 00:36:08.566 "method": "bdev_nvme_attach_controller" 00:36:08.566 }' 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:08.566 01:21:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:08.566 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:08.566 ... 
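The functional difference from the fio_dif_rand_params runs is visible in the attach parameters printed above: "hdgst": true and "ddgst": true enable NVMe/TCP header and data digests (CRC32C protection of each PDU header and payload), which is what fio_dif_digest exercises on top of the DIF type 3 null bdev created earlier. The fio workload itself stays simple, per the banner: 128 KiB random reads at iodepth 3 across 3 jobs for the 10 s runtime configured above. As a rough cross-check, the ~24.0 MiB/s per-file figures reported below correspond to about 192 reads per second at this block size (192 x 128 KiB = 24 MiB/s), in line with the IOPS column.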
00:36:08.566 fio-3.35 00:36:08.566 Starting 3 threads 00:36:08.566 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.797 00:36:20.797 filename0: (groupid=0, jobs=1): err= 0: pid=1316747: Sun Jul 14 01:22:08 2024 00:36:20.797 read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(241MiB/10034msec) 00:36:20.797 slat (nsec): min=7637, max=96209, avg=19616.58, stdev=5426.19 00:36:20.797 clat (usec): min=9489, max=58370, avg=15594.59, stdev=3604.65 00:36:20.797 lat (usec): min=9510, max=58391, avg=15614.21, stdev=3605.03 00:36:20.797 clat percentiles (usec): 00:36:20.797 | 1.00th=[10683], 5.00th=[12518], 10.00th=[13304], 20.00th=[14091], 00:36:20.797 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15270], 60.00th=[15795], 00:36:20.797 | 70.00th=[16188], 80.00th=[16712], 90.00th=[17695], 95.00th=[18220], 00:36:20.797 | 99.00th=[19530], 99.50th=[54264], 99.90th=[57934], 99.95th=[58459], 00:36:20.797 | 99.99th=[58459] 00:36:20.797 bw ( KiB/s): min=22528, max=26880, per=34.49%, avg=24629.70, stdev=1634.28, samples=20 00:36:20.797 iops : min= 176, max= 210, avg=192.40, stdev=12.76, samples=20 00:36:20.797 lat (msec) : 10=0.26%, 20=98.91%, 50=0.26%, 100=0.57% 00:36:20.797 cpu : usr=93.50%, sys=5.92%, ctx=49, majf=0, minf=194 00:36:20.797 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.797 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.797 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.797 filename0: (groupid=0, jobs=1): err= 0: pid=1316748: Sun Jul 14 01:22:08 2024 00:36:20.797 read: IOPS=169, BW=21.2MiB/s (22.2MB/s)(213MiB/10047msec) 00:36:20.797 slat (nsec): min=7677, max=47470, avg=16433.30, stdev=4891.72 00:36:20.797 clat (usec): min=11052, max=63126, avg=17642.02, stdev=5501.62 00:36:20.797 lat (usec): min=11065, max=63146, avg=17658.46, stdev=5501.56 00:36:20.797 clat percentiles (usec): 00:36:20.797 | 1.00th=[12387], 5.00th=[14484], 10.00th=[15008], 20.00th=[15664], 00:36:20.797 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16909], 60.00th=[17433], 00:36:20.797 | 70.00th=[17695], 80.00th=[18482], 90.00th=[19268], 95.00th=[19792], 00:36:20.797 | 99.00th=[57410], 99.50th=[58459], 99.90th=[60031], 99.95th=[63177], 00:36:20.797 | 99.99th=[63177] 00:36:20.797 bw ( KiB/s): min=19200, max=23599, per=30.49%, avg=21775.15, stdev=1359.12, samples=20 00:36:20.797 iops : min= 150, max= 184, avg=170.10, stdev=10.59, samples=20 00:36:20.797 lat (msec) : 20=95.31%, 50=3.05%, 100=1.64% 00:36:20.797 cpu : usr=93.24%, sys=6.30%, ctx=19, majf=0, minf=110 00:36:20.797 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.797 issued rwts: total=1704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.797 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.797 filename0: (groupid=0, jobs=1): err= 0: pid=1316749: Sun Jul 14 01:22:08 2024 00:36:20.797 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(247MiB/10046msec) 00:36:20.797 slat (nsec): min=7456, max=49183, avg=16275.09, stdev=4884.44 00:36:20.797 clat (usec): min=8177, max=53236, avg=15225.40, stdev=2207.80 00:36:20.797 lat (usec): min=8198, max=53249, avg=15241.68, stdev=2207.73 00:36:20.797 clat percentiles (usec): 00:36:20.797 | 
1.00th=[10028], 5.00th=[11863], 10.00th=[13042], 20.00th=[13829], 00:36:20.797 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15139], 60.00th=[15533], 00:36:20.797 | 70.00th=[16057], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:36:20.797 | 99.00th=[19530], 99.50th=[19792], 99.90th=[46924], 99.95th=[53216], 00:36:20.797 | 99.99th=[53216] 00:36:20.797 bw ( KiB/s): min=22016, max=27904, per=35.35%, avg=25241.60, stdev=1793.73, samples=20 00:36:20.797 iops : min= 172, max= 218, avg=197.20, stdev=14.01, samples=20 00:36:20.797 lat (msec) : 10=1.11%, 20=98.48%, 50=0.35%, 100=0.05% 00:36:20.797 cpu : usr=92.45%, sys=7.07%, ctx=17, majf=0, minf=164 00:36:20.797 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:20.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:20.797 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:20.797 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:20.797 00:36:20.797 Run status group 0 (all jobs): 00:36:20.797 READ: bw=69.7MiB/s (73.1MB/s), 21.2MiB/s-24.6MiB/s (22.2MB/s-25.8MB/s), io=701MiB (735MB), run=10034-10047msec 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:20.797 00:36:20.797 real 0m11.143s 00:36:20.797 user 0m29.133s 00:36:20.797 sys 0m2.236s 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:20.797 01:22:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:20.797 ************************************ 00:36:20.797 END TEST fio_dif_digest 00:36:20.797 ************************************ 00:36:20.797 01:22:08 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:20.797 01:22:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:20.797 01:22:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:36:20.797 rmmod nvme_tcp 00:36:20.797 rmmod nvme_fabrics 00:36:20.797 rmmod nvme_keyring 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1310703 ']' 00:36:20.797 01:22:08 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1310703 00:36:20.797 01:22:08 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1310703 ']' 00:36:20.797 01:22:08 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1310703 00:36:20.797 01:22:08 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:20.798 01:22:08 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:20.798 01:22:08 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1310703 00:36:20.798 01:22:08 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:20.798 01:22:08 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:20.798 01:22:08 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1310703' 00:36:20.798 killing process with pid 1310703 00:36:20.798 01:22:08 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1310703 00:36:20.798 01:22:08 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1310703 00:36:20.798 01:22:08 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:20.798 01:22:08 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:20.798 Waiting for block devices as requested 00:36:20.798 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:20.798 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:20.798 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:21.056 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:21.056 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:21.056 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:21.056 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:21.315 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:21.315 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:21.315 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:21.315 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:21.575 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:21.575 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:21.575 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:21.834 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:21.834 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:21.834 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:22.094 01:22:11 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:22.094 01:22:11 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:22.094 01:22:11 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:22.094 01:22:11 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:22.094 01:22:11 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.094 01:22:11 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:22.094 01:22:11 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.001 01:22:13 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:24.001 00:36:24.001 real 1m6.142s 00:36:24.001 user 6m26.189s 00:36:24.001 sys 0m19.840s 00:36:24.001 01:22:13 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:36:24.001 01:22:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.001 ************************************ 00:36:24.001 END TEST nvmf_dif 00:36:24.001 ************************************ 00:36:24.001 01:22:13 -- common/autotest_common.sh@1142 -- # return 0 00:36:24.001 01:22:13 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:24.001 01:22:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:24.001 01:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:24.001 01:22:13 -- common/autotest_common.sh@10 -- # set +x 00:36:24.001 ************************************ 00:36:24.001 START TEST nvmf_abort_qd_sizes 00:36:24.001 ************************************ 00:36:24.001 01:22:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:24.001 * Looking for test storage... 00:36:24.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.001 01:22:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.001 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:24.259 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.260 01:22:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:24.260 01:22:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:26.161 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:26.161 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:26.161 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:26.161 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
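[editor's note] The trace above is the gather_supported_nvmf_pci_devs step: it walks the PCI bus cache for supported Intel E810/X722 and Mellanox IDs and records the kernel net devices under each matching function (here two Intel 0x159b ports bound to ice, exposing cvl_0_0 and cvl_0_1). A minimal standalone sketch of that discovery follows; it is not the nvmf/common.sh implementation itself, and the vendor/device pair and tool choice (lspci + sysfs) are assumptions based on what the log reports.

    #!/usr/bin/env bash
    # Sketch: list PCI network functions for one vendor:device pair and the
    # net devices exposed under each, mirroring the "Found ..." lines above.
    vendor=8086            # Intel (from the trace)
    device=159b            # E810 family, as seen above
    for pci in $(lspci -Dn -d "${vendor}:${device}" | awk '{print $1}'); do
        echo "Found ${pci} (0x${vendor} - 0x${device})"
        for net in /sys/bus/pci/devices/${pci}/net/*; do
            [ -e "$net" ] && echo "Found net devices under ${pci}: $(basename "$net")"
        done
    done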
00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.161 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:26.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:26.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:36:26.162 00:36:26.162 --- 10.0.0.2 ping statistics --- 00:36:26.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.162 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:26.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:26.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:36:26.162 00:36:26.162 --- 10.0.0.1 ping statistics --- 00:36:26.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.162 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:26.162 01:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:27.539 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:27.539 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:27.539 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:27.539 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:27.539 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:27.539 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:27.539 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:27.539 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:27.539 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:27.539 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:27.539 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:27.539 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:27.539 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:27.539 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:27.539 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:27.539 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:28.478 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1321524 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1321524 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1321524 ']' 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:28.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:28.478 01:22:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:28.478 [2024-07-14 01:22:17.830298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:36:28.478 [2024-07-14 01:22:17.830383] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.478 EAL: No free 2048 kB hugepages reported on node 1 00:36:28.735 [2024-07-14 01:22:17.900701] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:28.736 [2024-07-14 01:22:17.993632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.736 [2024-07-14 01:22:17.993707] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.736 [2024-07-14 01:22:17.993723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.736 [2024-07-14 01:22:17.993736] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.736 [2024-07-14 01:22:17.993748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:28.736 [2024-07-14 01:22:17.993830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.736 [2024-07-14 01:22:17.993890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:28.736 [2024-07-14 01:22:17.993935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:28.736 [2024-07-14 01:22:17.993937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.736 01:22:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:28.736 01:22:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:28.736 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:28.736 01:22:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:28.736 01:22:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:28.736 01:22:18 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:29.000 01:22:18 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:29.000 01:22:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:29.000 ************************************ 00:36:29.000 START TEST spdk_target_abort 00:36:29.000 ************************************ 00:36:29.000 01:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:29.000 01:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:29.000 01:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:29.000 01:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:29.000 01:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.290 spdk_targetn1 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.290 [2024-07-14 01:22:21.013368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.290 [2024-07-14 01:22:21.045598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:32.290 01:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:32.290 EAL: No free 2048 kB hugepages 
reported on node 1 00:36:34.820 Initializing NVMe Controllers 00:36:34.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:34.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:34.820 Initialization complete. Launching workers. 00:36:34.820 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10656, failed: 0 00:36:34.820 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1274, failed to submit 9382 00:36:34.820 success 837, unsuccess 437, failed 0 00:36:35.077 01:22:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:35.077 01:22:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.077 EAL: No free 2048 kB hugepages reported on node 1 00:36:38.393 Initializing NVMe Controllers 00:36:38.393 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:38.393 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:38.393 Initialization complete. Launching workers. 00:36:38.393 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8526, failed: 0 00:36:38.393 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 7274 00:36:38.393 success 290, unsuccess 962, failed 0 00:36:38.393 01:22:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:38.393 01:22:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.393 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.692 Initializing NVMe Controllers 00:36:41.692 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.692 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:41.692 Initialization complete. Launching workers. 
00:36:41.692 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30452, failed: 0 00:36:41.692 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2682, failed to submit 27770 00:36:41.692 success 521, unsuccess 2161, failed 0 00:36:41.692 01:22:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:41.692 01:22:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.692 01:22:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:41.692 01:22:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.692 01:22:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:41.692 01:22:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.692 01:22:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1321524 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1321524 ']' 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1321524 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1321524 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1321524' 00:36:43.079 killing process with pid 1321524 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1321524 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1321524 00:36:43.079 00:36:43.079 real 0m14.165s 00:36:43.079 user 0m53.364s 00:36:43.079 sys 0m2.822s 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.079 ************************************ 00:36:43.079 END TEST spdk_target_abort 00:36:43.079 ************************************ 00:36:43.079 01:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:43.079 01:22:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:43.079 01:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:43.079 01:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:43.079 01:22:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:43.079 
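[editor's note] Before the kernel-target variant starts below, this is the spdk_target_abort flow just traced, condensed into the equivalent standalone commands. Every RPC name and flag is taken from the trace; invoking scripts/rpc.py directly (instead of the test's rpc_cmd wrapper, with nvmf_tgt already running inside the cvl_0_0_ns_spdk namespace) is an assumption made only to keep the sketch self-contained.

    # Target side: export the local NVMe device at 0000:88:00.0 over NVMe/TCP.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: drive abort traffic at each queue depth the test uses.
    for qd in 4 24 64; do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
            -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done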
************************************ 00:36:43.079 START TEST kernel_target_abort 00:36:43.079 ************************************ 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:43.079 01:22:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:44.011 Waiting for block devices as requested 00:36:44.268 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:44.268 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:44.268 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:44.526 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:44.526 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:44.526 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:44.785 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:44.785 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:44.785 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:44.785 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:45.044 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:45.044 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:45.044 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:45.044 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:45.303 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:45.303 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:45.303 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:45.562 No valid GPT data, bailing 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:45.562 01:22:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:45.562 00:36:45.562 Discovery Log Number of Records 2, Generation counter 2 00:36:45.562 =====Discovery Log Entry 0====== 00:36:45.562 trtype: tcp 00:36:45.562 adrfam: ipv4 00:36:45.562 subtype: current discovery subsystem 00:36:45.562 treq: not specified, sq flow control disable supported 00:36:45.562 portid: 1 00:36:45.562 trsvcid: 4420 00:36:45.562 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:45.562 traddr: 10.0.0.1 00:36:45.562 eflags: none 00:36:45.562 sectype: none 00:36:45.562 =====Discovery Log Entry 1====== 00:36:45.562 trtype: tcp 00:36:45.562 adrfam: ipv4 00:36:45.562 subtype: nvme subsystem 00:36:45.562 treq: not specified, sq flow control disable supported 00:36:45.562 portid: 1 00:36:45.562 trsvcid: 4420 00:36:45.562 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:45.562 traddr: 10.0.0.1 00:36:45.562 eflags: none 00:36:45.562 sectype: none 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.562 01:22:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:45.562 01:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:45.562 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.849 Initializing NVMe Controllers 00:36:48.849 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:48.850 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:48.850 Initialization complete. Launching workers. 00:36:48.850 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29590, failed: 0 00:36:48.850 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29590, failed to submit 0 00:36:48.850 success 0, unsuccess 29590, failed 0 00:36:48.850 01:22:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:48.850 01:22:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:48.850 EAL: No free 2048 kB hugepages reported on node 1 00:36:52.130 Initializing NVMe Controllers 00:36:52.130 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:52.130 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:52.130 Initialization complete. Launching workers. 
00:36:52.130 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57340, failed: 0 00:36:52.130 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14434, failed to submit 42906 00:36:52.130 success 0, unsuccess 14434, failed 0 00:36:52.130 01:22:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:52.130 01:22:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:52.130 EAL: No free 2048 kB hugepages reported on node 1 00:36:55.412 Initializing NVMe Controllers 00:36:55.412 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:55.412 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:55.412 Initialization complete. Launching workers. 00:36:55.412 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56071, failed: 0 00:36:55.412 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13978, failed to submit 42093 00:36:55.412 success 0, unsuccess 13978, failed 0 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:55.412 01:22:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:56.347 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:56.347 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:56.347 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:56.347 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:56.347 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:56.347 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:56.347 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:56.347 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:56.347 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:56.347 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:56.347 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:56.347 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:56.347 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:56.347 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:56.347 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:56.347 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:57.285 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:57.285 00:36:57.285 real 0m14.298s 00:36:57.285 user 0m4.665s 00:36:57.285 sys 0m3.429s 00:36:57.285 01:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:57.285 01:22:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.285 ************************************ 00:36:57.285 END TEST kernel_target_abort 00:36:57.285 ************************************ 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:57.544 rmmod nvme_tcp 00:36:57.544 rmmod nvme_fabrics 00:36:57.544 rmmod nvme_keyring 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1321524 ']' 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1321524 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1321524 ']' 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1321524 00:36:57.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1321524) - No such process 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1321524 is not found' 00:36:57.544 Process with pid 1321524 is not found 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:57.544 01:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:58.511 Waiting for block devices as requested 00:36:58.776 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:58.776 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:58.776 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:59.034 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:59.034 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:59.034 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:59.292 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:59.292 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:59.292 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:59.292 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:59.292 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:59.550 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:59.550 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:59.550 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:36:59.550 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:59.808 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:59.809 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:59.809 01:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:59.809 01:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:59.809 01:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:59.809 01:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:59.809 01:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.809 01:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:59.809 01:22:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.349 01:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:02.349 00:37:02.349 real 0m37.843s 00:37:02.349 user 1m0.140s 00:37:02.349 sys 0m9.559s 00:37:02.349 01:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:02.349 01:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 ************************************ 00:37:02.349 END TEST nvmf_abort_qd_sizes 00:37:02.349 ************************************ 00:37:02.349 01:22:51 -- common/autotest_common.sh@1142 -- # return 0 00:37:02.349 01:22:51 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:02.349 01:22:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:02.349 01:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:02.349 01:22:51 -- common/autotest_common.sh@10 -- # set +x 00:37:02.349 ************************************ 00:37:02.349 START TEST keyring_file 00:37:02.349 ************************************ 00:37:02.349 01:22:51 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:02.349 * Looking for test storage... 
00:37:02.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.349 01:22:51 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.349 01:22:51 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.349 01:22:51 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.349 01:22:51 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.349 01:22:51 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.349 01:22:51 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.349 01:22:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:02.349 01:22:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lvL7LXsy2e 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:02.349 01:22:51 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lvL7LXsy2e 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lvL7LXsy2e 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lvL7LXsy2e 00:37:02.349 01:22:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TFqZjFMUaI 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:02.349 01:22:51 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TFqZjFMUaI 00:37:02.349 01:22:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TFqZjFMUaI 00:37:02.350 01:22:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TFqZjFMUaI 00:37:02.350 01:22:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=1327276 00:37:02.350 01:22:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:02.350 01:22:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1327276 00:37:02.350 01:22:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1327276 ']' 00:37:02.350 01:22:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.350 01:22:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:02.350 01:22:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.350 01:22:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:02.350 01:22:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.350 [2024-07-14 01:22:51.451323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:37:02.350 [2024-07-14 01:22:51.451417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327276 ] 00:37:02.350 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.350 [2024-07-14 01:22:51.513155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.350 [2024-07-14 01:22:51.604349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:02.608 01:22:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.608 [2024-07-14 01:22:51.865548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.608 null0 00:37:02.608 [2024-07-14 01:22:51.897600] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:02.608 [2024-07-14 01:22:51.898091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:02.608 [2024-07-14 01:22:51.905609] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:02.608 01:22:51 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.608 [2024-07-14 01:22:51.913619] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:02.608 request: 00:37:02.608 { 00:37:02.608 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.608 "secure_channel": false, 00:37:02.608 "listen_address": { 00:37:02.608 "trtype": "tcp", 00:37:02.608 "traddr": "127.0.0.1", 00:37:02.608 "trsvcid": "4420" 00:37:02.608 }, 00:37:02.608 "method": "nvmf_subsystem_add_listener", 00:37:02.608 "req_id": 1 00:37:02.608 } 00:37:02.608 Got JSON-RPC error response 00:37:02.608 response: 00:37:02.608 { 00:37:02.608 "code": -32602, 00:37:02.608 "message": "Invalid parameters" 00:37:02.608 } 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 
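By this point the freshly started spdk_tgt already has a TCP transport, a listener on 127.0.0.1:4420 and the PSK-protected host entry for nqn.2016-06.io.spdk:host0, so the NOT-wrapped nvmf_subsystem_add_listener call is expected to fail with "Listener already exists" / JSON-RPC -32602, which is exactly what the trace records. A hedged sketch of the equivalent manual setup with scripts/rpc.py follows; the nvmf_create_subsystem step is inferred from the test's subnqn variable rather than shown verbatim in this excerpt.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp                          # "*** TCP Transport Init ***"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0      # inferred, not traced above
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 --psk "$key0path"  # deprecated PSK-path form noted in the log
# Adding the same listener a second time is the negative test: it returns -32602 "Invalid parameters".
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420 \
    || echo "duplicate listener rejected, as expected"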
00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:02.608 01:22:51 keyring_file -- keyring/file.sh@46 -- # bperfpid=1327289 00:37:02.608 01:22:51 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:02.608 01:22:51 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1327289 /var/tmp/bperf.sock 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1327289 ']' 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:02.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:02.608 01:22:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.608 [2024-07-14 01:22:51.959586] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:37:02.608 [2024-07-14 01:22:51.959648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327289 ] 00:37:02.608 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.608 [2024-07-14 01:22:52.019589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.866 [2024-07-14 01:22:52.110440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.866 01:22:52 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:02.866 01:22:52 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:02.866 01:22:52 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lvL7LXsy2e 00:37:02.866 01:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lvL7LXsy2e 00:37:03.124 01:22:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TFqZjFMUaI 00:37:03.124 01:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TFqZjFMUaI 00:37:03.383 01:22:52 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:03.383 01:22:52 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:03.383 01:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.383 01:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.383 01:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:03.641 01:22:52 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.lvL7LXsy2e == \/\t\m\p\/\t\m\p\.\l\v\L\7\L\X\s\y\2\e ]] 00:37:03.641 01:22:52 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:37:03.641 01:22:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:03.641 01:22:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:03.641 01:22:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.641 01:22:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.900 01:22:53 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.TFqZjFMUaI == \/\t\m\p\/\t\m\p\.\T\F\q\Z\j\F\M\U\a\I ]] 00:37:03.900 01:22:53 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:03.900 01:22:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:03.900 01:22:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:03.900 01:22:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.900 01:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.900 01:22:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:04.158 01:22:53 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:04.158 01:22:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:04.158 01:22:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:04.158 01:22:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:04.158 01:22:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.158 01:22:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:04.158 01:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.416 01:22:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:04.416 01:22:53 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.416 01:22:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.674 [2024-07-14 01:22:53.976353] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:04.674 nvme0n1 00:37:04.674 01:22:54 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:04.674 01:22:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:04.674 01:22:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:04.674 01:22:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.674 01:22:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.674 01:22:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:04.932 01:22:54 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:04.932 01:22:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:04.932 01:22:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:04.932 01:22:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:04.932 01:22:54 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.932 01:22:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.932 01:22:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:05.191 01:22:54 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:05.191 01:22:54 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:05.450 Running I/O for 1 seconds... 00:37:06.388 00:37:06.388 Latency(us) 00:37:06.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:06.388 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:06.388 nvme0n1 : 1.02 4591.03 17.93 0.00 0.00 27571.34 11505.21 41748.86 00:37:06.388 =================================================================================================================== 00:37:06.388 Total : 4591.03 17.93 0.00 0.00 27571.34 11505.21 41748.86 00:37:06.388 0 00:37:06.388 01:22:55 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:06.388 01:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:06.645 01:22:55 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:06.645 01:22:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:06.645 01:22:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.645 01:22:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.645 01:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.645 01:22:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:06.903 01:22:56 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:06.903 01:22:56 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:06.903 01:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:06.903 01:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.903 01:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.903 01:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.903 01:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:07.161 01:22:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:07.161 01:22:56 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:07.161 01:22:56 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:07.161 01:22:56 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:07.161 01:22:56 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:07.161 01:22:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:07.161 01:22:56 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:07.161 01:22:56 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:07.161 01:22:56 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:07.161 01:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:07.418 [2024-07-14 01:22:56.677305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:07.418 [2024-07-14 01:22:56.677931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1782710 (107): Transport endpoint is not connected 00:37:07.418 [2024-07-14 01:22:56.678938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1782710 (9): Bad file descriptor 00:37:07.418 [2024-07-14 01:22:56.679922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:07.418 [2024-07-14 01:22:56.679943] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:07.418 [2024-07-14 01:22:56.679957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:07.418 request: 00:37:07.418 { 00:37:07.418 "name": "nvme0", 00:37:07.418 "trtype": "tcp", 00:37:07.418 "traddr": "127.0.0.1", 00:37:07.418 "adrfam": "ipv4", 00:37:07.418 "trsvcid": "4420", 00:37:07.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:07.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:07.418 "prchk_reftag": false, 00:37:07.418 "prchk_guard": false, 00:37:07.418 "hdgst": false, 00:37:07.418 "ddgst": false, 00:37:07.418 "psk": "key1", 00:37:07.418 "method": "bdev_nvme_attach_controller", 00:37:07.418 "req_id": 1 00:37:07.418 } 00:37:07.418 Got JSON-RPC error response 00:37:07.418 response: 00:37:07.418 { 00:37:07.418 "code": -5, 00:37:07.418 "message": "Input/output error" 00:37:07.418 } 00:37:07.418 01:22:56 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:07.418 01:22:56 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:07.419 01:22:56 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:07.419 01:22:56 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:07.419 01:22:56 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:07.419 01:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:07.419 01:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.419 01:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.419 01:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:07.419 01:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.677 01:22:56 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:07.677 01:22:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:07.677 01:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:07.677 01:22:56 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.677 01:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.677 01:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:07.677 01:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.935 01:22:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:07.935 01:22:57 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:07.935 01:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:08.193 01:22:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:08.193 01:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:08.451 01:22:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:08.451 01:22:57 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:08.451 01:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.709 01:22:57 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:08.709 01:22:57 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.lvL7LXsy2e 00:37:08.709 01:22:57 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lvL7LXsy2e 00:37:08.709 01:22:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:08.709 01:22:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lvL7LXsy2e 00:37:08.709 01:22:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:08.709 01:22:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.709 01:22:57 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:08.709 01:22:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.709 01:22:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lvL7LXsy2e 00:37:08.709 01:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lvL7LXsy2e 00:37:08.967 [2024-07-14 01:22:58.184770] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lvL7LXsy2e': 0100660 00:37:08.967 [2024-07-14 01:22:58.184807] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:08.967 request: 00:37:08.967 { 00:37:08.967 "name": "key0", 00:37:08.967 "path": "/tmp/tmp.lvL7LXsy2e", 00:37:08.967 "method": "keyring_file_add_key", 00:37:08.967 "req_id": 1 00:37:08.967 } 00:37:08.967 Got JSON-RPC error response 00:37:08.967 response: 00:37:08.967 { 00:37:08.967 "code": -1, 00:37:08.967 "message": "Operation not permitted" 00:37:08.967 } 00:37:08.967 01:22:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:08.967 01:22:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:08.967 01:22:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:08.967 01:22:58 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:08.967 01:22:58 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.lvL7LXsy2e 00:37:08.967 01:22:58 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lvL7LXsy2e 00:37:08.967 01:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lvL7LXsy2e 00:37:09.225 01:22:58 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.lvL7LXsy2e 00:37:09.225 01:22:58 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:09.225 01:22:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:09.225 01:22:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.225 01:22:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.225 01:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.225 01:22:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:09.483 01:22:58 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:09.483 01:22:58 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:09.483 01:22:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:09.483 01:22:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:09.483 01:22:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:09.483 01:22:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:09.483 01:22:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:09.483 01:22:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:09.483 01:22:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:09.483 01:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:09.742 [2024-07-14 01:22:58.926824] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lvL7LXsy2e': No such file or directory 00:37:09.742 [2024-07-14 01:22:58.926863] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:09.742 [2024-07-14 01:22:58.926926] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:09.742 [2024-07-14 01:22:58.926939] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:09.742 [2024-07-14 01:22:58.926951] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:09.742 request: 00:37:09.742 { 00:37:09.742 "name": "nvme0", 00:37:09.742 "trtype": "tcp", 00:37:09.742 "traddr": "127.0.0.1", 00:37:09.742 "adrfam": "ipv4", 00:37:09.742 
"trsvcid": "4420", 00:37:09.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.742 "prchk_reftag": false, 00:37:09.742 "prchk_guard": false, 00:37:09.742 "hdgst": false, 00:37:09.742 "ddgst": false, 00:37:09.742 "psk": "key0", 00:37:09.742 "method": "bdev_nvme_attach_controller", 00:37:09.742 "req_id": 1 00:37:09.742 } 00:37:09.742 Got JSON-RPC error response 00:37:09.742 response: 00:37:09.742 { 00:37:09.742 "code": -19, 00:37:09.742 "message": "No such device" 00:37:09.742 } 00:37:09.742 01:22:58 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:09.742 01:22:58 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:09.742 01:22:58 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:09.742 01:22:58 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:09.742 01:22:58 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:09.742 01:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:10.000 01:22:59 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bN0mnA2BOw 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:10.000 01:22:59 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:10.000 01:22:59 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:10.000 01:22:59 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:10.000 01:22:59 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:10.000 01:22:59 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:10.000 01:22:59 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bN0mnA2BOw 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bN0mnA2BOw 00:37:10.000 01:22:59 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.bN0mnA2BOw 00:37:10.000 01:22:59 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bN0mnA2BOw 00:37:10.000 01:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bN0mnA2BOw 00:37:10.258 01:22:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:10.258 01:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:10.517 nvme0n1 00:37:10.517 
01:22:59 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:10.517 01:22:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.517 01:22:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.517 01:22:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.517 01:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.517 01:22:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.775 01:23:00 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:10.775 01:23:00 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:10.775 01:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:11.034 01:23:00 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:11.034 01:23:00 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:11.034 01:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.034 01:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:11.034 01:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.293 01:23:00 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:11.293 01:23:00 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:11.293 01:23:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:11.293 01:23:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:11.293 01:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:11.293 01:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:11.293 01:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:11.550 01:23:00 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:11.550 01:23:00 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:11.550 01:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:11.808 01:23:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:11.808 01:23:01 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:11.808 01:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:12.066 01:23:01 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:12.066 01:23:01 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bN0mnA2BOw 00:37:12.066 01:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bN0mnA2BOw 00:37:12.324 01:23:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TFqZjFMUaI 00:37:12.324 01:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TFqZjFMUaI 00:37:12.581 01:23:01 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.581 01:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:12.840 nvme0n1 00:37:12.840 01:23:02 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:12.840 01:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:13.134 01:23:02 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:13.134 "subsystems": [ 00:37:13.134 { 00:37:13.134 "subsystem": "keyring", 00:37:13.134 "config": [ 00:37:13.134 { 00:37:13.134 "method": "keyring_file_add_key", 00:37:13.134 "params": { 00:37:13.134 "name": "key0", 00:37:13.134 "path": "/tmp/tmp.bN0mnA2BOw" 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "keyring_file_add_key", 00:37:13.134 "params": { 00:37:13.134 "name": "key1", 00:37:13.134 "path": "/tmp/tmp.TFqZjFMUaI" 00:37:13.134 } 00:37:13.134 } 00:37:13.134 ] 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "subsystem": "iobuf", 00:37:13.134 "config": [ 00:37:13.134 { 00:37:13.134 "method": "iobuf_set_options", 00:37:13.134 "params": { 00:37:13.134 "small_pool_count": 8192, 00:37:13.134 "large_pool_count": 1024, 00:37:13.134 "small_bufsize": 8192, 00:37:13.134 "large_bufsize": 135168 00:37:13.134 } 00:37:13.134 } 00:37:13.134 ] 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "subsystem": "sock", 00:37:13.134 "config": [ 00:37:13.134 { 00:37:13.134 "method": "sock_set_default_impl", 00:37:13.134 "params": { 00:37:13.134 "impl_name": "posix" 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "sock_impl_set_options", 00:37:13.134 "params": { 00:37:13.134 "impl_name": "ssl", 00:37:13.134 "recv_buf_size": 4096, 00:37:13.134 "send_buf_size": 4096, 00:37:13.134 "enable_recv_pipe": true, 00:37:13.134 "enable_quickack": false, 00:37:13.134 "enable_placement_id": 0, 00:37:13.134 "enable_zerocopy_send_server": true, 00:37:13.134 "enable_zerocopy_send_client": false, 00:37:13.134 "zerocopy_threshold": 0, 00:37:13.134 "tls_version": 0, 00:37:13.134 "enable_ktls": false 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "sock_impl_set_options", 00:37:13.134 "params": { 00:37:13.134 "impl_name": "posix", 00:37:13.134 "recv_buf_size": 2097152, 00:37:13.134 "send_buf_size": 2097152, 00:37:13.134 "enable_recv_pipe": true, 00:37:13.134 "enable_quickack": false, 00:37:13.134 "enable_placement_id": 0, 00:37:13.134 "enable_zerocopy_send_server": true, 00:37:13.134 "enable_zerocopy_send_client": false, 00:37:13.134 "zerocopy_threshold": 0, 00:37:13.134 "tls_version": 0, 00:37:13.134 "enable_ktls": false 00:37:13.134 } 00:37:13.134 } 00:37:13.134 ] 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "subsystem": "vmd", 00:37:13.134 "config": [] 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "subsystem": "accel", 00:37:13.134 "config": [ 00:37:13.134 { 00:37:13.134 "method": "accel_set_options", 00:37:13.134 "params": { 00:37:13.134 "small_cache_size": 128, 00:37:13.134 "large_cache_size": 16, 00:37:13.134 "task_count": 2048, 00:37:13.134 "sequence_count": 2048, 00:37:13.134 "buf_count": 2048 00:37:13.134 } 00:37:13.134 } 00:37:13.134 ] 00:37:13.134 
}, 00:37:13.134 { 00:37:13.134 "subsystem": "bdev", 00:37:13.134 "config": [ 00:37:13.134 { 00:37:13.134 "method": "bdev_set_options", 00:37:13.134 "params": { 00:37:13.134 "bdev_io_pool_size": 65535, 00:37:13.134 "bdev_io_cache_size": 256, 00:37:13.134 "bdev_auto_examine": true, 00:37:13.134 "iobuf_small_cache_size": 128, 00:37:13.134 "iobuf_large_cache_size": 16 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "bdev_raid_set_options", 00:37:13.134 "params": { 00:37:13.134 "process_window_size_kb": 1024 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "bdev_iscsi_set_options", 00:37:13.134 "params": { 00:37:13.134 "timeout_sec": 30 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "bdev_nvme_set_options", 00:37:13.134 "params": { 00:37:13.134 "action_on_timeout": "none", 00:37:13.134 "timeout_us": 0, 00:37:13.134 "timeout_admin_us": 0, 00:37:13.134 "keep_alive_timeout_ms": 10000, 00:37:13.134 "arbitration_burst": 0, 00:37:13.134 "low_priority_weight": 0, 00:37:13.134 "medium_priority_weight": 0, 00:37:13.134 "high_priority_weight": 0, 00:37:13.134 "nvme_adminq_poll_period_us": 10000, 00:37:13.134 "nvme_ioq_poll_period_us": 0, 00:37:13.134 "io_queue_requests": 512, 00:37:13.134 "delay_cmd_submit": true, 00:37:13.134 "transport_retry_count": 4, 00:37:13.134 "bdev_retry_count": 3, 00:37:13.134 "transport_ack_timeout": 0, 00:37:13.134 "ctrlr_loss_timeout_sec": 0, 00:37:13.134 "reconnect_delay_sec": 0, 00:37:13.134 "fast_io_fail_timeout_sec": 0, 00:37:13.134 "disable_auto_failback": false, 00:37:13.134 "generate_uuids": false, 00:37:13.134 "transport_tos": 0, 00:37:13.134 "nvme_error_stat": false, 00:37:13.134 "rdma_srq_size": 0, 00:37:13.134 "io_path_stat": false, 00:37:13.134 "allow_accel_sequence": false, 00:37:13.134 "rdma_max_cq_size": 0, 00:37:13.134 "rdma_cm_event_timeout_ms": 0, 00:37:13.134 "dhchap_digests": [ 00:37:13.134 "sha256", 00:37:13.134 "sha384", 00:37:13.134 "sha512" 00:37:13.134 ], 00:37:13.134 "dhchap_dhgroups": [ 00:37:13.134 "null", 00:37:13.134 "ffdhe2048", 00:37:13.134 "ffdhe3072", 00:37:13.134 "ffdhe4096", 00:37:13.134 "ffdhe6144", 00:37:13.134 "ffdhe8192" 00:37:13.134 ] 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "bdev_nvme_attach_controller", 00:37:13.134 "params": { 00:37:13.134 "name": "nvme0", 00:37:13.134 "trtype": "TCP", 00:37:13.134 "adrfam": "IPv4", 00:37:13.134 "traddr": "127.0.0.1", 00:37:13.134 "trsvcid": "4420", 00:37:13.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:13.134 "prchk_reftag": false, 00:37:13.134 "prchk_guard": false, 00:37:13.134 "ctrlr_loss_timeout_sec": 0, 00:37:13.134 "reconnect_delay_sec": 0, 00:37:13.134 "fast_io_fail_timeout_sec": 0, 00:37:13.134 "psk": "key0", 00:37:13.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:13.134 "hdgst": false, 00:37:13.134 "ddgst": false 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "bdev_nvme_set_hotplug", 00:37:13.134 "params": { 00:37:13.134 "period_us": 100000, 00:37:13.134 "enable": false 00:37:13.134 } 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "method": "bdev_wait_for_examine" 00:37:13.134 } 00:37:13.134 ] 00:37:13.134 }, 00:37:13.134 { 00:37:13.134 "subsystem": "nbd", 00:37:13.134 "config": [] 00:37:13.134 } 00:37:13.134 ] 00:37:13.134 }' 00:37:13.134 01:23:02 keyring_file -- keyring/file.sh@114 -- # killprocess 1327289 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1327289 ']' 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1327289 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1327289 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1327289' 00:37:13.134 killing process with pid 1327289 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@967 -- # kill 1327289 00:37:13.134 Received shutdown signal, test time was about 1.000000 seconds 00:37:13.134 00:37:13.134 Latency(us) 00:37:13.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:13.134 =================================================================================================================== 00:37:13.134 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:13.134 01:23:02 keyring_file -- common/autotest_common.sh@972 -- # wait 1327289 00:37:13.392 01:23:02 keyring_file -- keyring/file.sh@117 -- # bperfpid=1328742 00:37:13.392 01:23:02 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1328742 /var/tmp/bperf.sock 00:37:13.392 01:23:02 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1328742 ']' 00:37:13.392 01:23:02 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:13.392 01:23:02 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:13.392 01:23:02 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:13.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
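After the I/O pass, save_config captures the whole bdevperf configuration, including the two keyring_file_add_key entries and the bdev_nvme_attach_controller stanza that references "psk": "key0"; the test then kills the first bdevperf and relaunches it with -c /dev/fd/63, feeding it that same JSON so the keys and the TLS controller are recreated at startup instead of via RPC. A sketch of the capture-and-replay pattern is below, with a regular file standing in for the process substitution used by the script.

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC save_config > config.json      # keyring, sock, accel and bdev (incl. psk: key0) subsystems
# Relaunch bdevperf from the saved configuration; the flags match the invocation in the trace.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c config.json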
00:37:13.392 01:23:02 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:13.392 01:23:02 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:13.392 01:23:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:13.392 01:23:02 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:13.392 "subsystems": [ 00:37:13.392 { 00:37:13.392 "subsystem": "keyring", 00:37:13.392 "config": [ 00:37:13.392 { 00:37:13.392 "method": "keyring_file_add_key", 00:37:13.392 "params": { 00:37:13.392 "name": "key0", 00:37:13.392 "path": "/tmp/tmp.bN0mnA2BOw" 00:37:13.392 } 00:37:13.392 }, 00:37:13.392 { 00:37:13.392 "method": "keyring_file_add_key", 00:37:13.392 "params": { 00:37:13.392 "name": "key1", 00:37:13.392 "path": "/tmp/tmp.TFqZjFMUaI" 00:37:13.392 } 00:37:13.392 } 00:37:13.392 ] 00:37:13.392 }, 00:37:13.392 { 00:37:13.392 "subsystem": "iobuf", 00:37:13.392 "config": [ 00:37:13.392 { 00:37:13.392 "method": "iobuf_set_options", 00:37:13.392 "params": { 00:37:13.392 "small_pool_count": 8192, 00:37:13.392 "large_pool_count": 1024, 00:37:13.392 "small_bufsize": 8192, 00:37:13.392 "large_bufsize": 135168 00:37:13.392 } 00:37:13.392 } 00:37:13.392 ] 00:37:13.392 }, 00:37:13.392 { 00:37:13.392 "subsystem": "sock", 00:37:13.392 "config": [ 00:37:13.392 { 00:37:13.392 "method": "sock_set_default_impl", 00:37:13.392 "params": { 00:37:13.392 "impl_name": "posix" 00:37:13.392 } 00:37:13.392 }, 00:37:13.392 { 00:37:13.392 "method": "sock_impl_set_options", 00:37:13.392 "params": { 00:37:13.392 "impl_name": "ssl", 00:37:13.392 "recv_buf_size": 4096, 00:37:13.392 "send_buf_size": 4096, 00:37:13.392 "enable_recv_pipe": true, 00:37:13.392 "enable_quickack": false, 00:37:13.392 "enable_placement_id": 0, 00:37:13.392 "enable_zerocopy_send_server": true, 00:37:13.392 "enable_zerocopy_send_client": false, 00:37:13.392 "zerocopy_threshold": 0, 00:37:13.392 "tls_version": 0, 00:37:13.392 "enable_ktls": false 00:37:13.392 } 00:37:13.392 }, 00:37:13.392 { 00:37:13.392 "method": "sock_impl_set_options", 00:37:13.392 "params": { 00:37:13.392 "impl_name": "posix", 00:37:13.392 "recv_buf_size": 2097152, 00:37:13.392 "send_buf_size": 2097152, 00:37:13.393 "enable_recv_pipe": true, 00:37:13.393 "enable_quickack": false, 00:37:13.393 "enable_placement_id": 0, 00:37:13.393 "enable_zerocopy_send_server": true, 00:37:13.393 "enable_zerocopy_send_client": false, 00:37:13.393 "zerocopy_threshold": 0, 00:37:13.393 "tls_version": 0, 00:37:13.393 "enable_ktls": false 00:37:13.393 } 00:37:13.393 } 00:37:13.393 ] 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "subsystem": "vmd", 00:37:13.393 "config": [] 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "subsystem": "accel", 00:37:13.393 "config": [ 00:37:13.393 { 00:37:13.393 "method": "accel_set_options", 00:37:13.393 "params": { 00:37:13.393 "small_cache_size": 128, 00:37:13.393 "large_cache_size": 16, 00:37:13.393 "task_count": 2048, 00:37:13.393 "sequence_count": 2048, 00:37:13.393 "buf_count": 2048 00:37:13.393 } 00:37:13.393 } 00:37:13.393 ] 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "subsystem": "bdev", 00:37:13.393 "config": [ 00:37:13.393 { 00:37:13.393 "method": "bdev_set_options", 00:37:13.393 "params": { 00:37:13.393 "bdev_io_pool_size": 65535, 00:37:13.393 "bdev_io_cache_size": 256, 00:37:13.393 "bdev_auto_examine": true, 00:37:13.393 "iobuf_small_cache_size": 128, 00:37:13.393 "iobuf_large_cache_size": 16 
00:37:13.393 } 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "method": "bdev_raid_set_options", 00:37:13.393 "params": { 00:37:13.393 "process_window_size_kb": 1024 00:37:13.393 } 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "method": "bdev_iscsi_set_options", 00:37:13.393 "params": { 00:37:13.393 "timeout_sec": 30 00:37:13.393 } 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "method": "bdev_nvme_set_options", 00:37:13.393 "params": { 00:37:13.393 "action_on_timeout": "none", 00:37:13.393 "timeout_us": 0, 00:37:13.393 "timeout_admin_us": 0, 00:37:13.393 "keep_alive_timeout_ms": 10000, 00:37:13.393 "arbitration_burst": 0, 00:37:13.393 "low_priority_weight": 0, 00:37:13.393 "medium_priority_weight": 0, 00:37:13.393 "high_priority_weight": 0, 00:37:13.393 "nvme_adminq_poll_period_us": 10000, 00:37:13.393 "nvme_ioq_poll_period_us": 0, 00:37:13.393 "io_queue_requests": 512, 00:37:13.393 "delay_cmd_submit": true, 00:37:13.393 "transport_retry_count": 4, 00:37:13.393 "bdev_retry_count": 3, 00:37:13.393 "transport_ack_timeout": 0, 00:37:13.393 "ctrlr_loss_timeout_sec": 0, 00:37:13.393 "reconnect_delay_sec": 0, 00:37:13.393 "fast_io_fail_timeout_sec": 0, 00:37:13.393 "disable_auto_failback": false, 00:37:13.393 "generate_uuids": false, 00:37:13.393 "transport_tos": 0, 00:37:13.393 "nvme_error_stat": false, 00:37:13.393 "rdma_srq_size": 0, 00:37:13.393 "io_path_stat": false, 00:37:13.393 "allow_accel_sequence": false, 00:37:13.393 "rdma_max_cq_size": 0, 00:37:13.393 "rdma_cm_event_timeout_ms": 0, 00:37:13.393 "dhchap_digests": [ 00:37:13.393 "sha256", 00:37:13.393 "sha384", 00:37:13.393 "sha512" 00:37:13.393 ], 00:37:13.393 "dhchap_dhgroups": [ 00:37:13.393 "null", 00:37:13.393 "ffdhe2048", 00:37:13.393 "ffdhe3072", 00:37:13.393 "ffdhe4096", 00:37:13.393 "ffdhe6144", 00:37:13.393 "ffdhe8192" 00:37:13.393 ] 00:37:13.393 } 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "method": "bdev_nvme_attach_controller", 00:37:13.393 "params": { 00:37:13.393 "name": "nvme0", 00:37:13.393 "trtype": "TCP", 00:37:13.393 "adrfam": "IPv4", 00:37:13.393 "traddr": "127.0.0.1", 00:37:13.393 "trsvcid": "4420", 00:37:13.393 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:13.393 "prchk_reftag": false, 00:37:13.393 "prchk_guard": false, 00:37:13.393 "ctrlr_loss_timeout_sec": 0, 00:37:13.393 "reconnect_delay_sec": 0, 00:37:13.393 "fast_io_fail_timeout_sec": 0, 00:37:13.393 "psk": "key0", 00:37:13.393 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:13.393 "hdgst": false, 00:37:13.393 "ddgst": false 00:37:13.393 } 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "method": "bdev_nvme_set_hotplug", 00:37:13.393 "params": { 00:37:13.393 "period_us": 100000, 00:37:13.393 "enable": false 00:37:13.393 } 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "method": "bdev_wait_for_examine" 00:37:13.393 } 00:37:13.393 ] 00:37:13.393 }, 00:37:13.393 { 00:37:13.393 "subsystem": "nbd", 00:37:13.393 "config": [] 00:37:13.393 } 00:37:13.393 ] 00:37:13.393 }' 00:37:13.393 [2024-07-14 01:23:02.744431] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
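With the second bdevperf booted from that JSON, the checks that follow only verify the replay: keyring_get_keys must report both keys, key0's refcnt must be 2 because the restored nvme0 controller holds a reference, and bdev_nvme_get_controllers must list nvme0. Condensed into the same jq idioms the test uses:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
[ "$($RPC keyring_get_keys | jq length)" -eq 2 ]                                          # key0 and key1 restored
[ "$($RPC keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt')" -eq 2 ]    # held by nvme0
[ "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" = nvme0 ]                        # controller recreated from config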
00:37:13.393 [2024-07-14 01:23:02.744513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328742 ] 00:37:13.393 EAL: No free 2048 kB hugepages reported on node 1 00:37:13.652 [2024-07-14 01:23:02.809425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.652 [2024-07-14 01:23:02.900594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.912 [2024-07-14 01:23:03.086117] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:14.478 01:23:03 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:14.478 01:23:03 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:14.478 01:23:03 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:14.478 01:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.478 01:23:03 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:14.736 01:23:03 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:14.736 01:23:03 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:14.736 01:23:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:14.736 01:23:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:14.736 01:23:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.736 01:23:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:14.736 01:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.994 01:23:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:14.994 01:23:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:14.994 01:23:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:14.994 01:23:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:14.994 01:23:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.994 01:23:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.994 01:23:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:15.252 01:23:04 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:15.252 01:23:04 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:15.252 01:23:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:15.252 01:23:04 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:15.510 01:23:04 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:15.510 01:23:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:15.510 01:23:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.bN0mnA2BOw /tmp/tmp.TFqZjFMUaI 00:37:15.510 01:23:04 keyring_file -- keyring/file.sh@20 -- # killprocess 1328742 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1328742 ']' 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1328742 00:37:15.510 01:23:04 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1328742 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1328742' 00:37:15.510 killing process with pid 1328742 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@967 -- # kill 1328742 00:37:15.510 Received shutdown signal, test time was about 1.000000 seconds 00:37:15.510 00:37:15.510 Latency(us) 00:37:15.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.510 =================================================================================================================== 00:37:15.510 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:15.510 01:23:04 keyring_file -- common/autotest_common.sh@972 -- # wait 1328742 00:37:15.769 01:23:04 keyring_file -- keyring/file.sh@21 -- # killprocess 1327276 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1327276 ']' 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1327276 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1327276 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1327276' 00:37:15.769 killing process with pid 1327276 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@967 -- # kill 1327276 00:37:15.769 [2024-07-14 01:23:04.954286] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:15.769 01:23:04 keyring_file -- common/autotest_common.sh@972 -- # wait 1327276 00:37:16.026 00:37:16.026 real 0m14.090s 00:37:16.026 user 0m34.764s 00:37:16.026 sys 0m3.253s 00:37:16.026 01:23:05 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:16.026 01:23:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.026 ************************************ 00:37:16.026 END TEST keyring_file 00:37:16.026 ************************************ 00:37:16.026 01:23:05 -- common/autotest_common.sh@1142 -- # return 0 00:37:16.026 01:23:05 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:16.026 01:23:05 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:16.027 01:23:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:16.027 01:23:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:16.027 01:23:05 -- common/autotest_common.sh@10 -- # set +x 00:37:16.027 ************************************ 00:37:16.027 START TEST keyring_linux 00:37:16.027 ************************************ 00:37:16.027 01:23:05 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:16.027 * Looking for test storage... 00:37:16.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:16.285 01:23:05 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:16.285 01:23:05 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.285 01:23:05 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:16.285 01:23:05 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.285 01:23:05 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.285 01:23:05 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.286 01:23:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.286 01:23:05 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.286 01:23:05 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.286 01:23:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:16.286 01:23:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:16.286 01:23:05 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:16.286 /tmp/:spdk-test:key0 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:16.286 01:23:05 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:16.286 01:23:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:16.286 /tmp/:spdk-test:key1 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1329101 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:16.286 01:23:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1329101 00:37:16.286 01:23:05 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1329101 ']' 00:37:16.286 01:23:05 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.286 01:23:05 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:16.286 01:23:05 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.286 01:23:05 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:16.286 01:23:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:16.286 [2024-07-14 01:23:05.602217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:37:16.286 [2024-07-14 01:23:05.602305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329101 ] 00:37:16.286 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.286 [2024-07-14 01:23:05.658599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.544 [2024-07-14 01:23:05.747355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.801 01:23:05 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:16.801 01:23:05 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:16.801 01:23:05 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:16.801 01:23:05 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:16.801 01:23:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:16.801 [2024-07-14 01:23:06.001064] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.801 null0 00:37:16.801 [2024-07-14 01:23:06.033106] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:16.801 [2024-07-14 01:23:06.033568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:16.801 01:23:06 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:16.801 01:23:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:16.801 599197479 00:37:16.801 01:23:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:16.801 779156209 00:37:16.801 01:23:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1329234 00:37:16.801 01:23:06 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:16.801 01:23:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1329234 /var/tmp/bperf.sock 00:37:16.801 01:23:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1329234 ']' 00:37:16.801 01:23:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.801 01:23:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:16.801 01:23:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.801 01:23:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:16.801 01:23:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:16.801 [2024-07-14 01:23:06.097583] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:37:16.801 [2024-07-14 01:23:06.097654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329234 ] 00:37:16.801 EAL: No free 2048 kB hugepages reported on node 1 00:37:16.801 [2024-07-14 01:23:06.158880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.059 [2024-07-14 01:23:06.252537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.059 01:23:06 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:17.059 01:23:06 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:17.059 01:23:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:17.059 01:23:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:17.317 01:23:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:17.317 01:23:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:17.574 01:23:06 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:17.574 01:23:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:17.832 [2024-07-14 01:23:07.112235] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:17.832 nvme0n1 00:37:17.832 01:23:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:17.832 01:23:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:17.832 01:23:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:17.832 01:23:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:17.832 01:23:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:17.832 01:23:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.090 01:23:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:18.090 01:23:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:18.090 01:23:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:18.090 01:23:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:18.090 01:23:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.090 01:23:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.090 01:23:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:18.348 01:23:07 keyring_linux -- keyring/linux.sh@25 -- # sn=599197479 00:37:18.348 01:23:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:18.348 01:23:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:37:18.348 01:23:07 keyring_linux -- keyring/linux.sh@26 -- # [[ 599197479 == \5\9\9\1\9\7\4\7\9 ]] 00:37:18.348 01:23:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 599197479 00:37:18.348 01:23:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:18.348 01:23:07 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:18.607 Running I/O for 1 seconds... 00:37:19.545 00:37:19.545 Latency(us) 00:37:19.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.545 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:19.545 nvme0n1 : 1.02 3696.91 14.44 0.00 0.00 34247.75 11553.75 44661.57 00:37:19.545 =================================================================================================================== 00:37:19.545 Total : 3696.91 14.44 0.00 0.00 34247.75 11553.75 44661.57 00:37:19.545 0 00:37:19.545 01:23:08 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:19.545 01:23:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:19.803 01:23:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:19.803 01:23:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:19.803 01:23:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:19.803 01:23:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:19.803 01:23:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.803 01:23:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:20.062 01:23:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:20.062 01:23:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:20.062 01:23:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:20.062 01:23:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:20.062 01:23:09 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:20.062 01:23:09 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:20.062 01:23:09 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:20.062 01:23:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:20.062 01:23:09 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:20.062 01:23:09 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:20.062 01:23:09 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:20.062 01:23:09 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:20.322 [2024-07-14 01:23:09.584275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:20.322 [2024-07-14 01:23:09.584295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b1680 (107): Transport endpoint is not connected 00:37:20.322 [2024-07-14 01:23:09.585287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b1680 (9): Bad file descriptor 00:37:20.322 [2024-07-14 01:23:09.586286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:20.322 [2024-07-14 01:23:09.586307] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:20.322 [2024-07-14 01:23:09.586322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:20.322 request: 00:37:20.322 { 00:37:20.322 "name": "nvme0", 00:37:20.322 "trtype": "tcp", 00:37:20.322 "traddr": "127.0.0.1", 00:37:20.322 "adrfam": "ipv4", 00:37:20.322 "trsvcid": "4420", 00:37:20.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:20.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:20.322 "prchk_reftag": false, 00:37:20.322 "prchk_guard": false, 00:37:20.322 "hdgst": false, 00:37:20.322 "ddgst": false, 00:37:20.322 "psk": ":spdk-test:key1", 00:37:20.322 "method": "bdev_nvme_attach_controller", 00:37:20.322 "req_id": 1 00:37:20.322 } 00:37:20.322 Got JSON-RPC error response 00:37:20.322 response: 00:37:20.322 { 00:37:20.322 "code": -5, 00:37:20.322 "message": "Input/output error" 00:37:20.322 } 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@33 -- # sn=599197479 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 599197479 00:37:20.322 1 links removed 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@33 -- # sn=779156209 00:37:20.322 
01:23:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 779156209 00:37:20.322 1 links removed 00:37:20.322 01:23:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1329234 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1329234 ']' 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1329234 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1329234 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1329234' 00:37:20.322 killing process with pid 1329234 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 1329234 00:37:20.322 Received shutdown signal, test time was about 1.000000 seconds 00:37:20.322 00:37:20.322 Latency(us) 00:37:20.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.322 =================================================================================================================== 00:37:20.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:20.322 01:23:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 1329234 00:37:20.583 01:23:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1329101 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1329101 ']' 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1329101 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1329101 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1329101' 00:37:20.583 killing process with pid 1329101 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@967 -- # kill 1329101 00:37:20.583 01:23:09 keyring_linux -- common/autotest_common.sh@972 -- # wait 1329101 00:37:21.153 00:37:21.153 real 0m4.922s 00:37:21.153 user 0m9.192s 00:37:21.153 sys 0m1.469s 00:37:21.153 01:23:10 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:21.153 01:23:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:21.153 ************************************ 00:37:21.153 END TEST keyring_linux 00:37:21.153 ************************************ 00:37:21.153 01:23:10 -- common/autotest_common.sh@1142 -- # return 0 00:37:21.153 01:23:10 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:21.153 01:23:10 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:21.153 01:23:10 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:21.153 01:23:10 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:21.153 01:23:10 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:21.153 01:23:10 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:21.153 01:23:10 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:21.153 01:23:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:21.153 01:23:10 -- common/autotest_common.sh@10 -- # set +x 00:37:21.153 01:23:10 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:21.153 01:23:10 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:21.153 01:23:10 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:21.153 01:23:10 -- common/autotest_common.sh@10 -- # set +x 00:37:23.056 INFO: APP EXITING 00:37:23.056 INFO: killing all VMs 00:37:23.056 INFO: killing vhost app 00:37:23.056 INFO: EXIT DONE 00:37:23.997 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:23.997 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:23.997 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:23.997 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:23.997 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:23.997 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:23.997 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:23.997 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:23.997 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:23.997 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:23.997 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:23.997 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:23.997 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:23.997 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:23.997 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:23.997 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:23.997 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:24.942 Cleaning 00:37:24.942 Removing: /var/run/dpdk/spdk0/config 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:24.942 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:24.942 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:24.942 Removing: /var/run/dpdk/spdk1/config 00:37:24.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:24.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:24.942 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:25.199 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:25.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:25.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:25.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:25.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:25.199 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:25.199 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:25.199 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:25.199 Removing: /var/run/dpdk/spdk2/config 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:25.199 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:25.199 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:25.199 Removing: /var/run/dpdk/spdk3/config 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:25.199 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:25.199 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:25.199 Removing: /var/run/dpdk/spdk4/config 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:25.199 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:25.199 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:25.199 Removing: /dev/shm/bdev_svc_trace.1 00:37:25.199 Removing: /dev/shm/nvmf_trace.0 00:37:25.199 Removing: /dev/shm/spdk_tgt_trace.pid1009519 00:37:25.199 Removing: /var/run/dpdk/spdk0 00:37:25.199 Removing: /var/run/dpdk/spdk1 00:37:25.199 Removing: /var/run/dpdk/spdk2 00:37:25.199 Removing: /var/run/dpdk/spdk3 00:37:25.199 Removing: /var/run/dpdk/spdk4 00:37:25.199 Removing: /var/run/dpdk/spdk_pid1007900 00:37:25.199 Removing: /var/run/dpdk/spdk_pid1008634 00:37:25.199 Removing: /var/run/dpdk/spdk_pid1009519 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1009882 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1010571 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1010711 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1011430 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1011509 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1011684 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1012988 00:37:25.200 Removing: 
/var/run/dpdk/spdk_pid1013919 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1014218 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1014413 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1014617 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1014806 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1014969 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1015128 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1015361 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1015617 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1017969 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1018131 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1018293 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1018415 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1018727 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1018758 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1019161 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1019170 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1019460 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1019472 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1019634 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1019762 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1020133 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1020287 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1020484 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1020654 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1020790 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1020865 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1021051 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1021290 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1021452 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1021605 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1021877 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1022036 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1022197 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1022350 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1022622 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1022783 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1022942 00:37:25.200 Removing: /var/run/dpdk/spdk_pid1023179 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1023361 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1023532 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1023683 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1023957 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1024113 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1024283 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1024476 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1024709 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1024780 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1024984 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1027172 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1080578 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1083073 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1090021 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1093810 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1096287 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1096695 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1100523 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1104362 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1104364 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1105026 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1105565 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1106218 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1106613 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1106627 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1106876 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1106894 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1106947 00:37:25.458 Removing: 
/var/run/dpdk/spdk_pid1107559 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1108212 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1108857 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1109236 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1109275 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1109414 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1110295 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1111013 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1116365 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1116638 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1119140 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1122869 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1125613 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1131880 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1137072 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1138261 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1138928 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1149013 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1151182 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1176438 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1179226 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1180404 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1181717 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1181736 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1181872 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1182008 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1182390 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1184254 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1184972 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1185282 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1186893 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1187316 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1187875 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1190266 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1193634 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1197061 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1220652 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1223292 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1227060 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1227885 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1229013 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1231493 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1233732 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1237928 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1237930 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1240816 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1240951 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1241086 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1241355 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1241369 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1242434 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1243623 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1245454 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1246639 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1247874 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1249050 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1252733 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1253192 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1254474 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1255214 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1258922 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1260772 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1264178 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1267510 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1273925 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1278795 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1278797 00:37:25.458 Removing: 
/var/run/dpdk/spdk_pid1291003 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1291415 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1291819 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1292239 00:37:25.458 Removing: /var/run/dpdk/spdk_pid1292802 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1293213 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1293732 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1294140 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1296554 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1296775 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1300557 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1300615 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1302325 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1307353 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1307359 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1310755 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1312154 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1313549 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1314288 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1315771 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1316570 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1321888 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1322215 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1322614 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1324159 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1324443 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1324839 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1327276 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1327289 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1328742 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1329101 00:37:25.716 Removing: /var/run/dpdk/spdk_pid1329234 00:37:25.716 Clean 00:37:25.716 01:23:15 -- common/autotest_common.sh@1451 -- # return 0 00:37:25.716 01:23:15 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:25.716 01:23:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:25.716 01:23:15 -- common/autotest_common.sh@10 -- # set +x 00:37:25.716 01:23:15 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:25.716 01:23:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:25.716 01:23:15 -- common/autotest_common.sh@10 -- # set +x 00:37:25.716 01:23:15 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:25.716 01:23:15 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:25.716 01:23:15 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:25.716 01:23:15 -- spdk/autotest.sh@391 -- # hash lcov 00:37:25.716 01:23:15 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:25.716 01:23:15 -- spdk/autotest.sh@393 -- # hostname 00:37:25.716 01:23:15 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:25.973 geninfo: WARNING: invalid characters removed from testname! 
00:37:58.092 01:23:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:58.352 01:23:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:01.647 01:23:50 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:04.183 01:23:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:07.475 01:23:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:10.790 01:24:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:14.074 01:24:03 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:14.333 01:24:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:14.333 01:24:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:14.333 01:24:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:14.333 01:24:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:14.333 01:24:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.333 01:24:03 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.333 01:24:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.333 01:24:03 -- paths/export.sh@5 -- $ export PATH 00:38:14.333 01:24:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:14.333 01:24:03 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:14.333 01:24:03 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:14.333 01:24:03 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720913043.XXXXXX 00:38:14.333 01:24:03 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720913043.91Wwr3 00:38:14.333 01:24:03 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:14.333 01:24:03 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:38:14.333 01:24:03 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:14.333 01:24:03 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:14.333 01:24:03 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:14.333 01:24:03 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:14.333 01:24:03 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:14.333 01:24:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:14.333 01:24:03 -- common/autotest_common.sh@10 -- $ set +x 00:38:14.333 01:24:03 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:14.333 01:24:03 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:14.333 01:24:03 -- pm/common@17 -- $ local monitor 00:38:14.333 01:24:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.333 01:24:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.333 01:24:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.333 
01:24:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:14.333 01:24:03 -- pm/common@21 -- $ date +%s 00:38:14.333 01:24:03 -- pm/common@21 -- $ date +%s 00:38:14.333 01:24:03 -- pm/common@25 -- $ sleep 1 00:38:14.333 01:24:03 -- pm/common@21 -- $ date +%s 00:38:14.333 01:24:03 -- pm/common@21 -- $ date +%s 00:38:14.333 01:24:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720913043 00:38:14.333 01:24:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720913043 00:38:14.333 01:24:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720913043 00:38:14.333 01:24:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720913043 00:38:14.333 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720913043_collect-vmstat.pm.log 00:38:14.333 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720913043_collect-cpu-load.pm.log 00:38:14.333 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720913043_collect-cpu-temp.pm.log 00:38:14.333 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720913043_collect-bmc-pm.bmc.pm.log 00:38:15.273 01:24:04 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:15.273 01:24:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:15.273 01:24:04 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:15.273 01:24:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:15.273 01:24:04 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:15.273 01:24:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:15.273 01:24:04 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:15.273 01:24:04 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:15.273 01:24:04 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:15.273 01:24:04 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:15.273 01:24:04 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:15.273 01:24:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:15.273 01:24:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:15.273 01:24:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:15.273 01:24:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:15.273 01:24:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:15.273 01:24:04 -- pm/common@44 -- $ pid=1340578 00:38:15.273 01:24:04 -- pm/common@50 -- $ kill -TERM 1340578 00:38:15.273 01:24:04 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:15.273 01:24:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:15.273 01:24:04 -- pm/common@44 -- $ pid=1340580 00:38:15.273 01:24:04 -- pm/common@50 -- $ kill -TERM 1340580 00:38:15.273 01:24:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:15.273 01:24:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:15.273 01:24:04 -- pm/common@44 -- $ pid=1340582 00:38:15.273 01:24:04 -- pm/common@50 -- $ kill -TERM 1340582 00:38:15.273 01:24:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:15.273 01:24:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:15.273 01:24:04 -- pm/common@44 -- $ pid=1340612 00:38:15.273 01:24:04 -- pm/common@50 -- $ sudo -E kill -TERM 1340612 00:38:15.273 + [[ -n 902748 ]] 00:38:15.273 + sudo kill 902748 00:38:15.283 [Pipeline] } 00:38:15.306 [Pipeline] // stage 00:38:15.311 [Pipeline] } 00:38:15.332 [Pipeline] // timeout 00:38:15.337 [Pipeline] } 00:38:15.357 [Pipeline] // catchError 00:38:15.362 [Pipeline] } 00:38:15.382 [Pipeline] // wrap 00:38:15.388 [Pipeline] } 00:38:15.406 [Pipeline] // catchError 00:38:15.415 [Pipeline] stage 00:38:15.417 [Pipeline] { (Epilogue) 00:38:15.434 [Pipeline] catchError 00:38:15.436 [Pipeline] { 00:38:15.452 [Pipeline] echo 00:38:15.454 Cleanup processes 00:38:15.461 [Pipeline] sh 00:38:15.782 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:15.782 1340761 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:15.782 1340845 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:15.798 [Pipeline] sh 00:38:16.087 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:16.087 ++ grep -v 'sudo pgrep' 00:38:16.087 ++ awk '{print $1}' 00:38:16.087 + sudo kill -9 1340761 00:38:16.104 [Pipeline] sh 00:38:16.394 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:26.387 [Pipeline] sh 00:38:26.673 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:26.674 Artifacts sizes are good 00:38:26.691 [Pipeline] archiveArtifacts 00:38:26.699 Archiving artifacts 00:38:26.942 [Pipeline] sh 00:38:27.225 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:27.241 [Pipeline] cleanWs 00:38:27.252 [WS-CLEANUP] Deleting project workspace... 00:38:27.252 [WS-CLEANUP] Deferred wipeout is used... 00:38:27.260 [WS-CLEANUP] done 00:38:27.261 [Pipeline] } 00:38:27.282 [Pipeline] // catchError 00:38:27.296 [Pipeline] sh 00:38:27.577 + logger -p user.info -t JENKINS-CI 00:38:27.587 [Pipeline] } 00:38:27.604 [Pipeline] // stage 00:38:27.610 [Pipeline] } 00:38:27.628 [Pipeline] // node 00:38:27.634 [Pipeline] End of Pipeline 00:38:27.671 Finished: SUCCESS